fr33s7yl0r Posted January 16, 2021

I literally registered to post this complaint here. How STUPID it is to have the M.2 SATA SHARED WITH HDD1!! IT DEFEATS THE WHOLE PURPOSE OF THIS ENCLOSURE. PERIOD!!!! My mistake was that I did not know what it meant that the M.2 was SHARED before I purchased this enclosure, and now that I need to expand beyond the measly 16GB of your eMMC, I have found out THE PAINFUL WAY that IT IS NOT POSSIBLE because of the shared SATA port. IT'S BEYOND FRUSTRATING. YOU CANNOT EVEN IMAGINE HOW STUPID THIS DECISION OF YOURS IS. IT LITERALLY DOWNGRADED YOUR NAS FROM AN EXCELLENT ENCLOSURE TO OVERPRICED JUNK THAT I HAD TO WAIT MORE THAN 6 MONTHS FOR. AND THERE IS NO WAY TO RETURN IT TO YOU AND GET MY MONEY BACK.

What should I do with my RAID now, once I want to install an SSD drive? Should I use the RAID array as slow storage and set up symbolic links to make sure I don't run out of space on the eMMC?

IF THERE WERE SOME WAY TO RATE YOUR ENCLOSURE SO NOBODY ELSE MAKES THE SAME MISTAKE I DID, I WOULD CERTAINLY DO THAT.
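Something like this is what I mean by the symlink idea (the /tmp paths below are purely illustrative stand-ins, not the real eMMC and RAID mount points):

```shell
# Illustration only: relocate a large directory from the small eMMC onto
# the RAID mount and leave a symlink behind so existing paths keep working.
mkdir -p /tmp/demo/emmc/srv /tmp/demo/raid
mv /tmp/demo/emmc/srv /tmp/demo/raid/srv
ln -s /tmp/demo/raid/srv /tmp/demo/emmc/srv
readlink /tmp/demo/emmc/srv   # prints /tmp/demo/raid/srv
```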
tommitytom Posted January 16, 2021

Seems like the mistake is yours. It's made pretty clear that the port is shared (it's even mentioned twice on the front page of kobol.io); it's not exactly a secret.
Werner Posted January 16, 2021

Shared lanes are not uncommon even on x86 mainboards. For example: https://www.asrock.com/mb/Intel/Fatal1ty%20Z97%20Killer/#Specification

Take note of the Storage and Slots sections:

Quote
The M.2 Socket, SATAE_4, SATAE_5 and SATA Express share lanes. If either one of them is in use, the others will be disabled.

That is in addition to @tommitytom's point.

3 hours ago, fr33s7yl0r said: MONEY

You can try to sell it in the forums. I have already noticed a few people who did this successfully here. However, if I were a potential buyer, I'd consider a person with such an attitude not necessarily trustworthy.
clostro Posted January 16, 2021

https://man7.org/linux/man-pages/man7/lvmcache.7.html
https://linuxhint.com/configuring-zfs-cache/

Here you go... use 4 HDDs and an SSD cache. Or sell your unit; quite a lot of people wanted to buy one and couldn't in time. OR, Frankenstein your unit and add port multipliers to the original SATA ports. You could add up to 20 HDDs to the 4 original SATA ports, and up to 5 SSDs to the remaining 1 original SATA port. The hardware controller specs say it supports port multipliers; not sure about the Armbian kernel, might have to modify it.

Btw, you can take a look at the new version of the Odroid H2+ with port multipliers (up to 10 SATA disks, plus PCIe NVMe) if you are into more tinkering. You also get two 2.5G network ports instead of one. The Hardkernel team has a blog post about its setup and benchmarks: https://wiki.odroid.com/odroid-h2/application_note/10_sata_drives

I am planning to expand my network infra with the H2+ soon. You can even plug a $45 4-port 2.5G switch into the NVMe slot now. I'm going crazy about this unit. If only it didn't have a shintel (r/AyyMD) cpu in it. Anyhow-

Just doing a bit of research shows that this was not a 'decision' made by Kobol, not exactly. There are just two reliable PCIe-to-SATA controllers I could find that support multiple SATA ports (4+), given the limitation of the RK3399, which has 4 PCIe lanes. It would be a different story if the RK had 8 lanes, but that is another can of worms that includes cpu arch, form factor, etc. Not gonna open that can while I'm barely qualified to open this one. What we have here in the Helios64 is the JMB585, and the other option was the Marvell 88SE9215. The Marvell only supports 4 ports, while the JMB supports 5. There are no controllers I could find that work reliably with 4 PCIe 2.1 lanes and have more than 5 ports. There is the quite new ASMedia ASM1166, which actually supports 6 ports, but it was probably not available during the design of the Helios64, as it is quite new.
Not only that, there is a weird thread about its log output on the Unraid forums. In the end, this '5 ports only' limit was not exactly a decision by Kobol; the purchase, however, was a rather uninformed decision on your part. You don't see people here complaining about this. You are the second one who has even mentioned it, and the only one complaining so far, with CAPS no less. Which means the specs were clear to pretty much everyone else.

My suggestion is to replace one of the drives with a Samsung 860 Pro and make it a SLOG/L2ARC, or in my case an LVM cache (writeback mode; make sure you have a proper UPS, or that the battery in the unit is connected properly), and call it a day. The SATA port is faster than the 2.5G Ethernet or the USB DAS mode anyhow, so your cache SSD will mostly perform fine.
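A minimal lvmcache sketch of that suggestion, assuming the four HDDs are already an mdadm RAID5 at /dev/md0, the SSD sits at /dev/sde, and LVM is recent enough (2.03+) for the --cachevol form. Device names and the cache size are hypothetical, and these commands destroy data on the named devices:

```shell
# Hypothetical layout: /dev/md0 = RAID5 of the 4 HDDs, /dev/sde = the SSD.
pvcreate /dev/md0 /dev/sde
vgcreate nas /dev/md0 /dev/sde
lvcreate -n data -l 100%PVS nas /dev/md0   # bulk LV on the HDD array only
lvcreate -n fast -L 200G nas /dev/sde      # cache volume on the SSD only
# writeback keeps dirty data on the SSD, which is why the UPS / internal
# battery caveat above really matters:
lvconvert --type cache --cachevol fast --cachemode writeback nas/data
```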
Werner Posted January 16, 2021

12 minutes ago, clostro said: not sure about the Armbian kernel, might have to modify it.

The rockchip64 family is built from mainline with these patches applied on top: https://github.com/armbian/build/tree/master/patch/kernel/rockchip64-current

Feel free to take a look into it. Curious about that too.
clostro Posted January 16, 2021

12 minutes ago, Werner said: Feel free to take a look into it. Curious about that too.

Couldn't find anything in the Helios64 patches, or the other RK3399 patches, that relates to port multiplication. Not sure I'm looking at this correctly, but when I search the repo for 'pmp' I see a lot of results in the config/kernel folder with CONFIG_SATA_PMP=y. Extrapolating from those search results, I think I can assume that port multiplier support is enabled by default in the Armbian kernel configs, though whether it works still depends on the specific SoC or controller driver. No?

But later I found this on https://wiki.kobol.io/helios64/sata/

Quote
SATA Controller Features:
- 5 SATA ports
- Command-based and FIS-based switching for Port Multiplier
- Compliance with SATA Specification Revision 3.2
- AHCI mode and IDE programming interface
- Native Command Queueing (NCQ)
- SATA link power saving mode (partial and slumber)
- SATA plug-in detection capable
- Drive power control and staggered spin-up
- SATA Partial / Slumber power management state

So apparently port multipliers should be supported on the Helios64, or will be supported? Honestly, I don't think they would be very useful, since the case has to be modified and such. I first mentioned it as a joke to begin with. I don't think anyone would go for this, as the case itself is a cool piece of design and butchering it would be a shame.
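As a quick sanity check, the kernel config can be grepped directly instead of searching the repo. This is only a sketch: on Armbian the running kernel's config usually lives at /boot/config-$(uname -r), and the sample file below merely stands in for it:

```shell
# Returns success if SATA port-multiplier support is compiled in.
pmp_enabled() {
    grep -qx 'CONFIG_SATA_PMP=y' "$1"
}

# Demo against a fabricated config snippet; on a real Helios64 you would
# pass /boot/config-$(uname -r) instead.
printf 'CONFIG_SATA_AHCI=y\nCONFIG_SATA_PMP=y\n' > /tmp/kconfig.sample
if pmp_enabled /tmp/kconfig.sample; then
    echo "PMP: enabled"      # prints "PMP: enabled" for this sample
else
    echo "PMP: disabled"
fi
```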
allen--smithee Posted January 16, 2021

The interior is 10.5 cm wide, 16 cm high, and 15.5 cm deep, and a commercial SSD measures about 10 cm x 7 cm x 0.6 cm with its case. Create a new backplane with 4x JMB575 (5 SATA ports each) spaced 1 cm apart and you could plug 20 SSDs (up to 80 TB = 20 x 4 TB) into the Helios64, put your cache on the M.2 SATA1 port, and wait for a new SoC with 4-8 lanes of PCIe v3 (the RK3399 has 4 lanes of v2.1).
Werner Posted January 18, 2021

On 1/16/2021 at 3:41 PM, allen--smithee said: Create a new backplane with 4x JMB575 (5 SATA ports each) spaced 1 cm apart and you could plug 20 SSDs (up to 80 TB = 20 x 4 TB) into the Helios64...

Might also be worth exploring the idea of an external add-on case for further 3.5" hard drives... @gprovost you heard what we want: go big or go home
gprovost Posted January 18, 2021

@fr33s7yl0r Updated your title to make it more friendly.

If we could have made it a 6x SATA port design at the time, with one port dedicated to the M.2 slot, you can be sure we would have done it. We could have taken a different approach using a PCIe switch, but it would have significantly increased the board size. Any board design is about finding the best recipe within a budget for a list of use cases, with the fewest tradeoffs possible. The M.2 SATA port on the Helios64 was added to support more use cases, even though, yes, it isn't ideal that the port has to be shared.
gprovost Posted January 18, 2021

@Werner Hahaha, yes, we are working on improving the design ;-) More news soon.
allen--smithee Posted January 18, 2021

@Werner I only wanted to bounce an idea off @clostro

Quote
...Honestly I don't think they would be very useful since the case has to be modified and stuff. I first mentioned it as a joke to begin with. I don't think anyone would go for this as the case itself...

But imagine, in the near future, Kobol offering its customers the option of choosing the backplane with the bundle or as a replacement part.

Disclosure: as a father of 3 children, I can't afford this kind of eccentricity anymore. @hardkernel with love.
clostro Posted January 19, 2021

@allen--smithee I can see how these options could be useful to others, but in my case all I needed was the original config: 4 HDD RAID5 + 1 SSD writeback cache. I would have been forced to get a QNAP if I needed more. But I'll play; I would pick none. Here is my counter-offer:

A - Replace the M.2 slot with a PMP-enabled eSATA port on the back, for attaching a low-end disk tower for bulk/slow storage. There are generic 8-slot towers available right now (no PMP over 5 drives; those are RAID towers, but still eSATA-connected). An added bonus is more space on the PCB for something else.

OR

B - If possible, expose a PCIe slot or two for adding an SFP+ card and a mini-SAS card, so a fast disk tower and a 10GbE network can be connected. This would probably require more RAM, a more powerful CPU, more watts, and all that jazz. Not sure if this is possible with the ARM chips available to us mere mortals yet. Obviously Amazon and others have their datacenter ARM CPUs that have all of this and then some. (I just realized after writing this that I described a Xeon QNAP unit... but they sell, sooo maybe good?)

They could design and sell those disk towers and the add-in cards as well. Currently available SAS and eSATA towers are quite drab and bland to look at; the attractive Helios64 design aesthetic would make them look very pretty. Also, this way the original Helios64 box, which is suited to home use, doesn't get bulkier: its design stays mostly unchanged, so the production troubles that have already been solved are built upon rather than scratched. I remember from the comments on the Kobol blog that the enclosure factory was problematic and caused some delays at the end. Multiple PCBs don't have to be produced for each use case, and the user is free to add options.

In my opinion, A) an eSATA port on the back plus a Kobol-designed disk enclosure would be perfect for most, and a good incremental way forward.
3-disk ZFS or RAID5 + 1 SSD for L2ARC/SLOG or dm-cache as a fast daily workspace, and 5-8 disks for daily backups / archives / any slow storage needs.

@gprovost Thank you for renaming the thread, I was feeling bad for posting in it and keeping it active.
allen--smithee Posted January 19, 2021

@clostro We have opposing visions. I understand your need for eSATA for backward compatibility; nevertheless, eSATA is a niche and dying standard, and there are cables to make a lossless eSATA link from a USB bus. Personally, I have always preferred USB 3.1 (5/10 Gbit) or 3.2 (20 Gbit) on a motherboard; with USB 3.2 you could connect 3 eSATA cables from a hub.

Silence is an essential aspect of my technology choices. When I saw the Helios64, with a TDP under 15 W and its passive radiator, I realized it wouldn't end up in the attic or the garage, shamefully sucking watts in secret. The radiator provides passive cooling and the fans remain silent even at full load. Kobol pulled off this 5-HDD NAS feat in a sturdy and small enclosure. On a daily basis it does its job on the smallest possible board; it is light, powerful, and functional. The Helios64, with its pleasantness and silence, has replaced my 65 W laptop on the desk. I have everything at hand now. That would not be possible with the standards you mentioned (SAS, SFP+...) at this price (< $300). SAS drives are expensive to buy and expensive to operate (one disk consumes as much as, or more than, the whole Helios64 board) and require active cooling.

I don't think you can compare an RK SoC on a Kobol product that you own outright with an Amazon, Apple, or Nvidia solution where you are merely a user of the service offered to you.

As for the choice of weapons: I already have A, but since I still have 3 SSDs in dying laptops, I would choose B/C + G/E/F.
gprovost Posted January 21, 2021

Maybe you guys could participate in the following thread.