Support of Helios4 - Intro


gprovost


14 hours ago, tkaiser said:

So what are the 2GB used for?

Run Nextcloud/ownCloud/Seafile directly from the device. Or use the rest of the memory to run an Nginx proxy, LXC containers (if possible on ARM) and/or Docker :)
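For example, a quick way to try Nextcloud in Docker would be something like this (just a sketch; it assumes the official nextcloud image provides an ARM build, and the port and volume name are arbitrary examples):

docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud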

It would be really good if virtualization were supported, to run pfSense. Also FreeNAS, but that might need more memory for ZFS, I guess.


Hi Guys,

 

It's nice to see the buzz around Helios4. We've had a pretty good start over the first 48 hours, so we are very optimistic.

 

Thanks @tkaiser for being so proactive and already answering most people's questions. I saw you were very active on CNX ;-) I have so many questions to reply to through KS, it's hard to catch up.

 

I take note of all your feedback. Regarding the HDD vibration absorbers and the filter screen, it's something I already had in mind but haven't yet found the right solution for. Note that the acrylic plate is much thicker than a typical PC metal case, so standard absorbers won't fit. If I can find the right solution we might include these in the kit by default.

 

Haha, I see someone really wants their ECC RAM, but yes, I agree it would be a great stretch goal.

We were also planning to include an OLED I2C screen in the stretch goal list.

 

But first we need to see how the campaign progresses over the next few days before dreaming too big.

 

Cheers.


Just wanted to post an update to announce that we launched a Helios4 Lucky Draw to help us get some attention.

Simply enter to WIN one of the following prizes:

1x Helios4 Full Kit 2GB with a bundle of 4 HDD WD Red 4TB
1x Bundle of 2 HDD WD Red 4TB
1x Helios4 Full Kit 2GB
4x USB 3.0 External HDD WD Elements 2TB
4x USB 3.0 Memory Stick Sandisk 128GB

Anyone can register for the competition: win.kobol.io/lp/29509/Helios4


Hi All,

A little update that might satisfy some of you @tkaiser

We have just introduced a new KIT with 2GB of ECC memory.

We received quite a lot of demand for an ECC option; while it is quite a premium feature (cost-wise), we finally decided to offer it for a little extra money.

Only 10 days remain to snatch a Helios4. We need your support, guys.

http://goo.gl/ECMtQ5

Cheers.


Hi All, Hi @tkaiser

 

Today we are running a new pre-order campaign on: shop.kobol.io

 

To make this new campaign possible we have revised the goal to 300 units, with a simplified offer that focuses on a single option: Helios4 Full Kit - 2GB ECC @ 195 USD.

To encourage this ECC upgrade and show our appreciation, we also wanted to give $10 off to our initial backers who believed in this project. So, for the next 2 weeks ONLY, use your discount code KSBACKER10 and enjoy $10 off.

We’re also happy to announce that we’ve addressed the following challenges:

  • No more confusing currency, everything is in USD
  • With Easyship, we managed to reduce the shipping fees

The Kobol Team [ kobol.io/helios4 ]



Only 12 missing: https://shop.kobol.io

 

To be honest: that's surprising. I would never have thought that only providing the most expensive eMMC ECC DRAM option would work. Fortunately I was wrong :)

 

How many more days is the $10 coupon valid?

 

(BTW: You should improve a little bit on getting more attention. Really)

 

 


@tkaiser The discount code (KSBACKER10) is valid till July 3rd.

 

Like you, we were surprised to see such a positive response to the ECC model, but it looks like the majority of our backers understood the added value and how it differentiates the Helios4 even more from proprietary entry-level NAS boxes. So thank you for suggesting (many times) this upgrade ;-)

 

Regarding 'getting more attention', it isn't that trivial unfortunately... we think that once the first Helios4 batch has been received and is in use by our backers, it will improve our exposure.


@Tido I just added a share button for Google+

 

@wsian Even though it was the intention in the early days of the project, the OLED display is not part of the kit. But you can order one on AliExpress for less than $10. Just choose any 1.3-inch OLED I2C screen with the SSD1306 controller IC. I will give more information about this topic later on. Perhaps, if we find the time, we might even offer it on our website as an add-on to Helios4, slightly cheaper than what AliExpress can offer.
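Once the screen is wired to the I2C header, a quick way to check that it is detected would be something like this (just a sketch; the bus number is a guess, run i2cdetect -l first to find the right one):

sudo apt-get install i2c-tools     # provides i2cdetect
sudo i2cdetect -l                  # list the available I2C buses
sudo i2cdetect -y 0                # scan bus 0; an SSD1306 usually shows up at address 0x3c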


Edit: 'Benchmarking gone wrong' warning: by accident I omitted iozone's -I flag below ('Use DIRECT IO if possible for all file operations. Tells the filesystem that all operations to the file are to bypass the buffer cache and go directly to disk'), so all read numbers with the 100MB test size are wrong since they come from DRAM and not disk.

 

Since I'm currently playing around with ZFS and have a '3 different SSDs attached to different SATA controllers' setup on my Clearfog Pro, today I thought I'd try out RAID-5 mdraid performance (I personally would never use RAID5 any more since it's an anachronistic concept from last century and no longer up to the task, but since most Helios4 users will love it... why not give it a try):

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4   130912   161095   783497   781714   707505   139759
          102400      16   147384   161478   939561   871600   894362   155135
          102400     512   140284   151258   819534   784290   837237   140587
          102400    1024   134588   148516   757065   687973   761542   127707
          102400   16384   129696   150554   645589   675420   728091   134269
         2048000       4   146697   113501   522420   542454    33662    46919
         2048000   16384   123967   113916   441318   477279   460094   137773

That's the output from our usual 'iozone -e -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2' and then another 'iozone -e -a -s 2000M -r 4k -r 16384k -i 0 -i 1 -i 2' to test with a file size twice the DRAM size.
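For reference, the 3-disk array itself was created with mdadm along these lines (a rough sketch only; device names, filesystem and mount point are assumptions, not the exact commands I used):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
cat /proc/mdstat                   # wait for the initial resync to finish before benchmarking
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid5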

 

It should be noted that especially the last test was limited by one of my SSDs (the Intel 540 slows down to ~60MB/s sequential write after approx. 1 GB of data has been written, which explains why RAID5 performance with 3 disks is limited to 'slowest device * 2'), so with a bunch of decent HDDs instead, the sequential write performance could be around 180MB/s. I let 'iostat 3' run in parallel to check CPU utilization; at least with (very) small block sizes the Clearfog Pro (Armada 38x clocked at 1600 MHz) is bottlenecked by CPU. %system utilization was partially above 75%, but I don't know whether Marvell's RAID5/6 XOR engine was already used (a module called async_xor is loaded, but I don't know whether that already makes use of HW acceleration).

 

It should also be noted that while creating the RAID5 array, SATA errors happened: http://sprunge.us/ciKT (ata6 is the 2nd port on the ASM1062 PCIe controller). With Helios4 there shouldn't be any such issues, since there all 4 SATA ports are provided directly by the Armada 38x (here I have 2 SERDES lanes configured as PCIe, with an ASM1062 and a Marvell 88SE9215 in use).

 

What do the above results mean for Helios4 with its 4 x SATA configuration? Since there we're talking about a single Gigabit Ethernet 'bottleneck', RAID5 storage performance will be sufficient anyway (100+ MB/s).


Here are some fresh benchmarks with Helios4 configured as RAID6, then as RAID5, with 4x Western Digital Red 2TB HDDs.

 

Note: this is also a 1GB RAM version clocked at 1.6GHz.

 

RAID 6

md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      39028736 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]

 

With buffer cache

iozone -e -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                              random    random                                   
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4   112166   182202   878938   897663   720385    46844                                                          
          102400      16   126067   193562   907704  1068469  1001711    98934                                                          
          102400     512   118358   182862   985050  1006993  1001779   170459                                                          
          102400    1024   117394   176268   875566   888008   889065   169862                                                          
          102400   16384   114508   169885   818170   817925   827517   172793                                                      

 

With DIRECT IO

iozone -e -a -I -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                              random    random                                   
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     5830     5826    34179    37669     2839      602                                                          
          102400      16    12445    19215    66829    71497    10074     2198                                                          
          102400     512   106759   116574   199069   281008    61111   124175                                                          
          102400    1024   129763   105815   173525   237719    96948   139462                                                          
          102400   16384   177544   171003   284953   315916   286591   164939 

 

RAID5

md0 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      58543104 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

 

With buffer cache

iozone -e -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                              random    random                                  
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4   115183   170676   904434   895645   722963    51570                                                          
          102400      16   119682   172369  1063885  1084722  1002759   116063                                                          
          102400     512   123952   201012   989143   992372  1001279   148636                                                          
          102400    1024   116065   197683   874443   885162   886703   198021                                                          
          102400   16384   129151   199118   830783   845355   838656   180389

 

With DIRECT IO

iozone -e -a -I -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                              random    random                                  
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     9552    10938    36978    39222    28551      530                                                          
          102400      16    11185    20621    64135    77590    33906     2159                                                          
          102400     512    40443    51534    96495   533923   536836    43272                                                          
          102400    1024    84312    78308   122916   486366   440706    83712                                                          
          102400   16384   196534   207664   598961   597055   619819   206123 

 

 

On small block operations, my setup shows around 50% CPU utilization and 50% memory usage.

 

I know there is no obvious way to tell whether the system offloads onto the hardware XOR engine. Check your kernel messages; you should see that the kernel has detected the 2 XOR engines.

[    0.737452] mv_xor f1060800.xor: Marvell shared XOR driver
[    0.776401] mv_xor f1060800.xor: Marvell XOR (Descriptor Mode): ( xor cpy intr pq )
[    0.776570] mv_xor f1060900.xor: Marvell shared XOR driver
[    0.816397] mv_xor f1060900.xor: Marvell XOR (Descriptor Mode): ( xor cpy intr pq )

In the current Armbian mvebu kernel config, the mv_xor driver is built into the kernel, not as a module.
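An easy way to verify this on your own image (a sketch, assuming the kernel config is shipped in /boot as on a typical Armbian/Debian install):

grep MV_XOR /boot/config-$(uname -r)
# expected: CONFIG_MV_XOR=y (built-in) rather than CONFIG_MV_XOR=m (module)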

 

I'm still trying to figure out a more concrete way to confirm that the system offloads onto the hardware engine, but based on the benchmarks it does.
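One indirect check that might help (just a sketch; the exact entry names in /proc/interrupts are an assumption, simply look for lines containing 'xor'): compare the XOR engines' interrupt counters before and after generating some parity work, e.g. by triggering an md check.

grep -i xor /proc/interrupts                 # note the counter values
echo check > /sys/block/md0/md/sync_action   # trigger a RAID check to generate parity work
grep -i xor /proc/interrupts                 # counters should increase if the engines are used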

 

 


 


On 5/11/2017 at 4:18 PM, tkaiser said:

 

So what are the 2GB used for?

 

 

I have a real-world use case for this. Right now, I am using a BPI-R2 to run OpenKM. It has been difficult, but I have it running very well now (without antivirus support, which always kills the machine). It is a very nice document management combination. In fact, OpenKM recommends 4GB on a 64-bit platform (dual core), but the BPI-R2 with 2GB and a 32-bit (4-core) CPU works well. It seems the Helios4 could serve as a variation of this base when I need to work with bigger files, and having 4GB could help me work with more concurrent users.

There are more options than just plain storage.


2 hours ago, malvcr said:

I have it running very well now (without antivirus support, which always kills the machine)

 

Somewhat unrelated, but I really hope you're aware that with the most recent Armbian release you're able to do MASSIVE memory overcommitment and performance will only slightly suck?

 

I've been running some SBCs for test purposes, as well as some production virtualized x86 servers, with 300% memory overcommitment for some time now. No problems whatsoever.
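If you want to see what Armbian set up on your own board, something like this should show it (swapon --show and zramctl are part of util-linux on recent releases):

swapon --show    # active swap devices and their priorities
zramctl          # zram devices, compression algorithm, size and actual memory used
free -h          # overall memory and swap usage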


Thanks tkaiser... that zRAM is extremely important.

 

Overcommitment is a real problem, but the security one is bigger.

 

Be it on the R2, the Helios4 or any of these SBC machines, sometimes it is necessary to define swap to be able to run certain tasks: maybe because they are heavy or, in other cases, because you have different users performing several independent tasks. However, no matter how you try to protect sensitive data, the swap will hold it on disk without any kind of protection. The alternative, then, is to encrypt the swap, and this carries an extra burden because of the constant encryption and decryption, on top of the disk bottleneck.
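In my case, the encrypted swap looks roughly like this on Debian/Ubuntu (a sketch only; the partition name is just an example, and the swap is re-keyed with a throwaway random key on every boot):

# /etc/crypttab
cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

# /etc/fstab
/dev/mapper/cryptswap  none  swap  sw  0  0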

 

When using RAM, you eliminate the need for encryption (when the machine is off, the data disappears), and you also avoid the disk latency. And, of course, compression is less heavy than encryption (at least the compression worth using). So, comparing them, it must be a win-win scenario.

 

Right now I can't use it because the R2 I am working with is ready for production. But I need to work on the next one, and I will add btrfs and zRAM. I can't use Armbian on the R2 because it is not yet supported, although I am using it on a BPI-M2+ included inside the appliance (that gives me peace of mind). For the R2 I have Ubuntu 18.04 and a special 4.14.71 kernel maintained by frank-w (his name on the Banana forum) with some extra options to use AF_ALG; oh, I can even read the temperature :-) ...

So as not to stray too far from the thread topic, I think the Helios4 should have natural access to zRAM and all these nice "power" features. In fact, I don't know how to enable it from the Armbian point of view. Could it be an extra option in the armbian-config tool?


35 minutes ago, tkaiser said:

 

Sorry, you lost me here. I'm not talking about stupid things like swapping to disk but ZRAM! Too much time wasted dealing with all this madness already. I need a break.

 

Oh ... maybe it is my way of writing English.  Excuse me.

 

Right now, without zRAM, and because of the particular restrictions of the R2 and the requirements of OpenKM, I need to add swap. And I have no intention of using the eMMC for that. So, currently, I am swapping to hard disk, and because of the security requirements I need to encrypt the swap partition (the swap will hold the sensitive data on disk sooner or later). This was my situation before knowing about this option. I agree 100% it is not the best setup.

 

So... zRAM is very nice, a much better option. I will use it to increase the R2's capacity for this particular application, and to eliminate the disk swap because... simply... it makes no sense (comparing an encrypted swap disk with zram). It will be faster, more secure, and simpler to configure.

 

And, as the Helios4 also has 2GB of RAM, if I use that machine I will also need to use zRAM. In fact, a machine with less than 2GB of RAM is useless for this type of application.


@malvcr This is going quite off topic and isn't hardware specific anymore. Please create a dedicated thread if you want to discuss your zram / swap topic further.

 

FYI, the zram module is available in Armbian (and therefore available on Helios4).

You can easily build your own startup script that sets up swap on zram, or if you are using Ubuntu Bionic you could also use the zram-config package.

Actually, swap on zram has already been configured by default in Armbian for a few releases.
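For those who still want to do it by hand, a minimal sketch looks like this (assuming a single zram device; the size and compression algorithm are just examples):

modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 512M > /sys/block/zram0/disksize        # size of the uncompressed swap space
mkswap /dev/zram0
swapon -p 100 /dev/zram0                     # give it higher priority than any disk swap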


52 minutes ago, gprovost said:

Actually, swap on zram has already been configured by default in Armbian for a few releases.


And ZRAM fine-tuning is coming with future updates.

 

Just now, malvcr said:

Understood ... thanks ...

 


No need to open a new topic. Here: 

 


10 hours ago, Igor said:


And ZRAM fine-tuning is coming with future updates.

 


No need to open a new topic. Here: 

 

Perfect ... I updated my M2+ and the zRAM is there ... I will jump to the right topic.

 

Thanks.

