malvcr

Members
  • Content count

    21
About malvcr

  • Rank
    Member

  1. mm ... that NanoPi deserves a look. I have two Orange NAS devices working well with Samsung EVO 250GB mSATA drives (I had to replace one Orange Pi Zero because it was extremely slow ... it is in my checking basket to see what happened). Then I needed another one with higher storage capacity, so I added a Seagate 2TB 2.5-inch mobile hard disk attached with the official Orange Pi SATA cable. It seems to work very well ... BUT ... in my configuration I sometimes need to attach a Raspberry Pi Zero as an OTG slave to the machine. I do the same with the other NAS devices without any issue, but the moment I connect the Raspberry Pi to this particular NAS, the hard disk drops out abruptly and some partitions are lost. It seems there is some problem with the power supply distribution on this NAS ... I will check what other options I have to avoid this problem. It happens no matter which USB port is used.
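     When the disk drops out like that, the kernel log usually shows whether it was a power event. A minimal sketch (the kernel.log capture file and the grep patterns are my own choices, not from the post):

     ```shell
     # Capture the kernel ring buffer, then look for the typical signs of
     # a power-related disk drop: USB disconnects, resets and I/O errors.
     dmesg > kernel.log 2>/dev/null || true    # may need root on some systems
     grep -Ei 'usb .*disconnect|reset .*usb|over-current|i/o error' kernel.log
     ```

     Run it right after reproducing the failure (connecting the Raspberry Pi) so the relevant messages are still in the ring buffer.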
  2. Well ... there are different references about how SD cards (SD, not SSD) behave in different scenarios. The people who have worried most about this are those using Arduinos and other microcontroller-based devices, since they work in more "precise" and "narrow" environments than computer users ... they try to squeeze out the last microampere to rescue some hours of battery, or to run on just a little solar energy.

     This NXP community thread about SD card consumption has a lot of detail on their particular problem, and a paragraph there (talking about computing cycles) is a good abstract: https://community.nxp.com/thread/422802

     And this is from an Arduino application: https://forum.arduino.cc/index.php?topic=261021.0 There "somebody" describes an online comparison where the Lexar 8GB SDHC card consumes "about 25%-30% of the other" ones. I have been trying to find that "online comparison", but the Internet is very big :-)

     So it seems there really is a difference in power consumption between SD cards. (I suppose the same holds for SSDs, but the power for an SSD goes through a different path than for an SD card, so the differences should not be critical enough to make booting fail.) But since others (not me) were able to make the 2G-IOT work with 16GB cards, maybe those cards' flash controllers do a better job managing power. Or maybe the synchronization in the RDA BIOS has some flaw that prevents several cards from communicating correctly with the machine.

     On the power supply: yes, I know that good quality cables and good quality power sources are paramount. So it is right to rule out power problems by ensuring a quality source (I don't mean 5A or anything like that ... just the needed power). However, the SD card problem goes beyond the external power supply.
  3. My tests:

     SanDisk Ultra 16GB class 10   - doesn't work
     SanDisk Ultra 8GB class 10    - works
     SanDisk 4GB class 4           - doesn't work
     Lexar 8GB class 10            - works
     Radio Shack 16GB class 10     - doesn't work

     zador.blood.stained says that these also work:

     Samsung Evo 16GB
     Samsung Evo Plus 16GB

     Does somebody else have more experiences, to build a bigger "working" chart?

     As a side note: some people in other places were saying that the power supply used with the machine was "the issue". However, I have been using the 2G-IOT with 2A and 3A supplies, and even connected to the USB 2.0 port of an Orange Pi Zero, and it works with all of them. The problem is the SD card, not the power supply. I don't have enough data yet to check whether the Samsung cards consume less than the SanDisk or Radio Shack ones.
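     To make such a chart more precise, the card's manufacturer ID, model name and production date can be read from sysfs on a Linux system that exposes the card through the mmc driver (a sketch; the mmcblk0 name assumes the card is the first MMC device):

     ```shell
     # Print the SD card's manufacturer ID, model name and production
     # date from the kernel's mmc attributes (card assumed at mmcblk0).
     for attr in manfid name date; do
         printf '%s: ' "$attr"
         cat "/sys/block/mmcblk0/device/$attr" 2>/dev/null || echo 'n/a'
     done
     ```

     Recording those values next to "works / doesn't work" would tell us whether the failures cluster by controller rather than by brand.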
  4. I put this in the Orange Pi forum ... but just for reference here. The machine doesn't boot properly with 16GB SD cards because of lack of power (it is not because the card is damaged or anything similar). So 8GB class 10 cards must be used instead (it seems that the larger the card, the bigger the consumption). What I don't know is whether this is a hardware design limitation or something related to the 1.6 version internal "BIOS". The only issue is that 8GB cards are going out of the market very quickly (local stores around here are skipping that size in favor of 16GB) ... so it is necessary to find them online, and with shipping and everything they can cost more than the 2G-IOT itself. As I have no idea of any compatible LCD for the 2G-IOT, I will find a GPIO-based one (it is supposed to be Raspberry Pi compatible ... let's see what happens ...).
  5. Well ... with a 100% copper heatsink and Arctic Silver 5 thermal compound:

     bottom     soc        top   conditions
     28 (35%)   43 (16%)   36    humid
     26 (35%)   40 (20%)   32    humid

     Compare with the previous values:

     bottom     soc        top   conditions
     32 (43%)   56 (28%)   40    dry
     28 (38%)   45 (24%)   34    dry
     29 (44%)   52 (29%)   37    humid

     The gradient between the bottom and the SOC was reduced, although so was the gradient between the SOC and the top. This comparison in particular is interesting:

     bottom     soc        top   conditions
     28 (35%)   43 (16%)   36    humid (copper heatsink, good thermal compound)
     29 (44%)   52 (29%)   37    humid (aluminum heatsink, adhesive thermal tape)

     They are in very similar conditions, and the raw difference is around 9 degrees Celsius. I will finish my production-grade box and re-test, but this really is relevant :-) ... thanks TKaiser for the observation.

     Later I will share some photographs, because it is important to "invent" something to hold the heatsink in place, as the Arctic Silver 5 compound is not a glue.

     P.S. I dropped the USB thumb drive. It is not really necessary for this configuration, and neither is the swap file.
  6. I am waiting for my thermal paste and some copper heatsinks (I can't find them in Costa Rica ... this is the problem of living here, you need some patience for certain types of tasks). But in the meantime I would like to share some things.

     The first one is about the box and its orientation. I made a vertical LEGO prototype where I am trying to simulate the chimney principle (holes only at the bottom and on top). According to this (and people with chimneys at home can verify it), the cold air above the top of the chimney will drive an air flow from the hot SOC toward the cold air without any kind of fan, drawing cold air in from the bottom. This is also used in heavy industry, and how tall the chimneys are has a direct relationship to how efficiently ovens can work. The OPIZ+NAS assembly is inside the box, but the cables are outside.

     Then, I have temperature values at three places in the box (I am not logging humidity data here ... today it is more humid than other days, and this has a direct impact on the final measurements, but the basic principle remains). In all these cases, the Orange Pi Zero has an aluminum heatsink attached with thermal tape.

     bottom     soc        top   conditions
     32 (43%)   56 (28%)   40    dry
     28 (38%)   45 (24%)   34    dry
     29 (44%)   52 (29%)   37    humid

     (Percentages are the gradient relative to the SOC.) As can be seen, there is a floor below which the SOC cannot be cooled ... the bottom temperature. If the SOC reached that value, the air would cease to flow. The goal (with a heatsink and thermal paste) is to reduce the gradient between the bottom and the SOC, but to increase it between the SOC and the top.

     Then there is the "glue" or the "paste". I really don't like to use epoxy glue ... it seems very permanent (of course it depends on the usage circumstances). So I will use Arctic Silver 5, which has a very good thermal conductivity (8.9 W/mK). And copper is the best material for the task (well, platinum or gold could be better, but I don't think the price justifies that). According to different overclocker forums, the best option is a thermal pad. As a reference ... the thermal conductivity of thermal tape is less than 4 W/mK, of a good thermal paste less than 9, and of a good thermal pad around 17. It is just that it is not so easy to attach the pad and keep it in place (it is not self-adhesive), and I don't know how long it can be used.

     (I will stop for a while ... until I have the paste and the copper heatsinks, make a more realistic box (without LEGO), and can share new numbers.)
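     The percentages in these tables are the temperature gradient relative to the SOC, i.e. (soc - point) / soc. A quick awk check on the dry 32/56/40 row:

     ```shell
     # Gradient of each measuring point relative to the SOC temperature,
     # for the 32 (bottom) / 56 (soc) / 40 (top) dry row.
     awk 'BEGIN {
         bottom = 32; soc = 56; top = 40
         printf "bottom: %.1f%%\n", (soc - bottom) / soc * 100
         printf "top:    %.1f%%\n", (soc - top) / soc * 100
     }'
     ```

     That prints 42.9% and 28.6%, which is where the rounded figures in the table come from.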
  7. I think this information is important in the NAS context, because I am not testing the OPIZ alone but attached to the NAS expansion board together with an mSATA storage device, a combination that more people than me will try with the OPIZ.

     OK ... let's see what I did. First, I tweaked the DRAM speed down to 132 MHz ... then tried one of the iozone tests ... although the machine still gave me SSH access, after one hour without any iozone result I stopped it (Ctrl-C). Then I ran it at 300 MHz with "normal" results:

     ionice -c1 iozone -I -a -g 1500m -s 1500m -i 0 -i 1 -r 4K -r 1024K

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
     1536000       4     6432     6454     6383     6383
     1536000    1024    35951    36170    36640    36632

     ionice -c1 iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
      102400       4     6353     6396     6357     6371      6367     6435
      102400      16    17944    18064    18162    18196     18074    18032
      102400     512    33952    34276    35159    35470     35425    34116
      102400    1024    34747    34513    35979    36055     35984    34755
      102400   16384    37208    37661    37465    37689     37734    37341

     There are some differences from your numbers ... it could be because I still have the swap on the USB thumb drive (I am trying not to change many variables at once).

     And this is the corresponding diagram: before 9:00 I had the fan on. Then the machine's temperature increased until around 9:30 ... around 11:00 there is a peak at 62 degrees Celsius (more or less when I stopped the iozone test). As can be seen, the temperature stayed below 60 degrees until around 15:00, when I put the fan back on and a quick temperature drop can be observed.

     The DRAM speed change works, but if I reduce it too much there is some kind of instability that even makes the CPU work harder; as can be seen, after the 62-degree peak the same iozone test made the CPU work, but not as much. I need to check the heatsink paste (right now it is using the factory adhesive that comes on new heatsinks). I had some Arctic CPU paste with me, but right now I don't know where I put it, so I need to buy some. When I have it I will repeat the test without the fan.
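     To correlate a benchmark run with a temperature curve like that, the SOC temperature can be sampled from sysfs while iozone runs. A sketch (the thermal_zone0 path is the usual one on mainline Linux; on legacy sunxi kernels the value may be in whole degrees instead of millidegrees):

     ```shell
     # Sample the SoC temperature a few times with timestamps; redirect
     # to a file and run alongside iozone to correlate load and heat.
     ZONE=/sys/class/thermal/thermal_zone0/temp
     for i in 1 2 3; do             # three samples here; extend as needed
         t=$(cat "$ZONE" 2>/dev/null || echo 'n/a')
         echo "$(date +%H:%M:%S) temp=$t"
         sleep 1                    # use 60 for one sample per minute
     done
     ```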
  8. Just as a reference: I put the machine with the NAS board at the bottom and the Pi on top (it seems more reasonable this way), and increased the clearance on both sides by a few millimeters. That reduced the temperature by around 2 degrees. Then I added a nice aluminum heatsink on top of the CPU ... which gained another 3 degrees while the machine was exposed.

     Then I remade my LEGO prototype box, with the SSD at the bottom and clear exposed areas on each side of the OPI ... and let the machine run (really doing nothing ... just rpimonitor on) for an hour and a half around 2:00 p.m. in a tropical country. The result: 63 degrees Celsius ... very near the safety temperature threshold (70).

     Looking around, I found a bigger (around 4 cm wide) 12V fan ... and connected it to the SATA power connector (it even has the right socket). As it is under-volted, it runs slowly and makes almost no noise or vibration, and the fan area is bigger than the 5V one tested before. Now the temperature is 42 degrees Celsius ... around 20 degrees less than without the fan. As can be seen in the old data (before 8:00), this is a little lower than with the small JET fan.

     What I see is that the OPI is, by itself, a hot machine. When you add the NAS expansion and put everything inside an enclosed box, that doesn't help very much to keep the assembly cool. Another test is to put a much bigger heatsink on the machine, maybe covering the whole OPIZ footprint (this is not a strange solution; the Parallella 16 does that). But for now, it seems that I really need active cooling.

     Here is the LEGO (and clones) box with the bigger fan. I will check VDD_CPUX to see how it changes the scenario.
  9. Quote: "Swapping is the opposite of 'sequential' writing so your benchmark is still wrong :) And both fan and USB thumb drive are the wrong hardware to address the problems you face :)"

     Swap

     Yeah ... the swap is not used sequentially. It depends on what memory area you are using. That test only showed that the SanDisk Fit is slower on writing.

     iozone -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 2000m

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
     2048000       4     1010        0     8057        0       802      438

     Compare it with the EVO:

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
     2048000       4     8635        0     8013        0      1341     2165

     The random read is not so terrible ... the write ... mmmmm ...

     In my first system test I was using some file management primitives that kept all involved files in memory. This was a clear mistake that used all the machine's virtual memory (i.e. swapping) and even crashed on big files. Then I took the memory size into consideration and wrote my own primitives that work on controlled-size segments, partial hash computation, etc. In this case the memory usage is under control, although I use more CPU. However, in my current prototype I am using a language with automatic memory management, and I have no 100% guarantee that memory will be released immediately when I stop using it (this will change in later versions written in C and C++). So ... with a big quantity of small objects (tens of thousands) I could still exceed the base 512MB the OPIZ has in a short period of time. And this can happen, because I am working with source code files and version control systems that produce a lot of tiny files. Also, the system has a web control interface that "also" uses memory. But it works well; I just need to take every detail into consideration. I have even run this on a Raspberry Pi Zero (WD Node Zero), with an even smaller footprint than the OPIZ+NAS device, including very limited data bandwidth.

     Do I really need the swap? ... I will check under heavy usage. Is the USB thumb drive the best option? ... is it a good option? ... let me check carefully and put all the factors in the balance. If it is not needed I will be very, very happy to drop it ... but also to drop the swap and run on RAM only (swap on the EVO disk or the SD card is not an option).

     Fan

     The fan ... every minute that passes I realize this was not a good idea:
     - noise, a lot of it
     - the machine doesn't have the basic circuitry to control it (it runs at 100% all the time), and if I add that, the design becomes more complex and loses its sense
     - extra electricity consumption, even when it is not needed
     - vibration that, eventually, could cause other problems
     - moving parts in a solid state device (including the storage). I am killing the goose that lays the golden eggs.
     - more complex housing and wiring

     But the machine is hot even when idling (240 MHz). However, I noticed some details I will explore further:
     - the computer's "natural" layout is horizontal, which limits how air passes through it, because hot air naturally rises and the NAS board acts as a roof. When I stand it vertically, the temperature drops a few degrees because it has a more natural way to cool down (partially using the principle Apple uses in their cylindrical workstation), together with a passive heatsink correctly oriented on the hot chips.
     - I will turn off WIFI and other subsystems I am not using (to reduce consumption -- hence temperature -- I understood your point in another thread).
     - the machine is very sensitive to external temperature. I will try to keep the internal temperature constant with an intelligent case, not just a box.
     - deactivating any Linux service that is not necessary.

     Maybe:
     - limiting the highest CPU clock (so as not to reach the limits). 1.0 GHz is still fast enough. In general, controlling the limit conditions.
     - controlling program loops, avoiding (while(true){}) cases.

     Let's make the best of this tiny machine :-) ... And thanks for the guide and the questions ... I am trying to keep an open mind. If something is wrong, it needs to be resolved.
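     For reference, putting the swap on the thumb drive and keeping it as a last resort is just a swap file plus a low swappiness. A sketch of the idea (the /mnt/usb mount point and the 512 MB size are assumptions, and these commands need root):

     ```shell
     # Create a 512 MB swap file on the USB stick (assumed mounted at
     # /mnt/usb), enable it, and make the kernel reluctant to swap.
     dd if=/dev/zero of=/mnt/usb/swapfile bs=1M count=512
     chmod 600 /mnt/usb/swapfile
     mkswap /mnt/usb/swapfile
     swapon /mnt/usb/swapfile
     sysctl vm.swappiness=10    # only swap under real memory pressure
     ```

     With swappiness low, the file only absorbs genuine memory-pressure spikes instead of routine caching, which matches the "rarely used safety valve" role described above.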
  10. I am downloading the OMV software ... let me try it these days.

     About endurance ratings for USB thumb drives, I remember reading something related, but I don't remember where. I have been looking for more data on the particular stick I am using (SanDisk Cruzer Fit 8GB), but I still don't have very detailed data. I know that for "BadUSB" there are tools to query the flash microcontroller, but I am a little hesitant to run Russian low-level software from an unknown source on a Windows machine :-)

     Without being very exhaustive, however, it is clear that this USB 2.0 stick (my mistake in a previous post) doesn't have stellar performance in the writing phase:

     iozone -a -g 4000m -s 400m -i 0 -i 1 -r 4K

          kB  reclen    write  rewrite     read   reread
      409600       4     4898     5015    32269    32664
      819200       4     4689     4583    32402    32385
     1638400       4     4265     4294    32238    32248
     3276800       4     4137     4147    32137    32138

     Why this stick in particular?

     1) Why a USB stick at all? ... it is simple: it works. I have some Raspberry Pi machines with sticks running for around three years now without any issue. They are Lexar 32GB, and they even host a MySQL database.
     2) Why not the Lexar? ... it is very big. I needed something small. And SanDisk is a very good brand.
     3) Why that model? ... I have been reading a lot of reviews. The main reason for choosing it was "reliability", not speed (anyway, on a USB 2.0 port I can't ask for much). For example, the Fit Ultra models with USB 3.0 capability overheat very often and can even stop operating, so they were automatically discarded from my "menu" of options. The Fit (without the Ultra) seems to be a very good option.
     4) But it has low write speed. Yes. If you need that speed, this is not a good option. But if what I am looking for is something that can handle "possible" swapping (which may not happen very often), under particular conditions, I really don't care about the speed. The important thing is that it does the job.

     And yes, it would be faster on the EVO 850, but not if I am also reading and writing on the EVO at the same time that I am swapping, everything on the same USB bus (30+4 is faster than 30). And really, if a situation could exist where the swapping is constant, I prefer to burn a thumb drive (replaceable in minutes without data loss) rather than the SSD (critical).

     However, we are learning many things about these small devices. Thanks for so many clues. I will complete and deliver my solution, but I will also monitor it carefully. In operation I will observe real enterprise-level usage conditions and make the appropriate adjustments. If something is important to share, I will write about it.

     Note: at this moment what I am fighting is temperature. I have a small 5V fan connected to the SATA power source (where I don't have any SATA disk connected) in a temporary LEGO box (to simulate an enclosure) ... it reduces the temperature by around 15 degrees Celsius but makes a terrible noise that is driving me crazy.
  11. OK ... I redid some tests you suggested in a previous post about USB bridges. They were performed at the native 1.2 GHz CPU speed and at 0.96 GHz.

     echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

     iozone -a -g 4000m -s 400m -i 0 -i 1 -r 4K

          kB  reclen    write  rewrite     read   reread
      409600       4    44513    44710    33810    33877
      819200       4    39321    39389    33811    33765
     1638400       4    37320    36728    33568    33577
     3276800       4    36094    35789    33567    33504

     iozone -a -g 4000m -s 400m -i 0 -i 1 -r 1024K

          kB  reclen    write  rewrite     read   reread
      409600    1024    43676    43873    32540    32425
      819200    1024    38948    38999    32455    32445
     1638400    1024    36959    36812    32293    32122
     3276800    1024    35803    35464    32204    32041

     iozone -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 2000m

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
     2048000       4     8788        0     8441        0      1439     2654

     Now ... limiting the CPU speed ...

     echo 960000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

     iozone -a -g 4000m -s 400m -i 0 -i 1 -r 4K

          kB  reclen    write  rewrite     read   reread
      409600       4    39426    41018    32133    32165
      819200       4    38615    38702    32202    32119
     1638400       4    36568    36454    32061    31985
     3276800       4    35466    35281    31864    31786

     iozone -a -g 4000m -s 400m -i 0 -i 1 -r 1024K

          kB  reclen    write  rewrite     read   reread
      409600    1024    42544    42910    31770    31848
      819200    1024    38392    38339    31682    31685
     1638400    1024    36314    36223    31606    31590
     3276800    1024    35108    34901    31442    31414

     iozone -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 2000m

                                                               random   random
          kB  reclen    write  rewrite     read   reread      read    write
     2048000       4     8635        0     8013        0      1341     2165

     The problem is not the random write performance of the 850 EVO. The system will run an automated secure backup process that does a lot of encrypting and compacting from several SAMBA, SSH and GIT sources, so it is very probable that the 512MB of RAM on the Orange Pi Zero won't be enough and it will start swapping. And I prefer not to burn the EVO with the swapping, to preserve the reliability of the data storage device over several years (this is a security solution that must be trustworthy on the data storage side). So, reliability in the long run is paramount, speed not so much. If, over time and because of swapping, the USB flash drive is damaged, it is very easy to replace ... no permanent data is stored there.

     However, I am not sure why it must be slow, as I am not sharing USB bandwidth the way I would if I put the data "and" the swap file on the same device connected to the same USB port (in this case, the Orange drives these devices on different USB buses, 04 vs 03). This is similar to having more than one hard disk on different SATA controllers, where a database has access to one disk and the swap is located on a different one.

     /:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=sunxi-ehci/1p, 480M
         |__ Port 1: Dev 3, If 0, Class=Mass Storage, Driver=usb-storage, 480M
     /:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=sunxi-ehci/1p, 480M
         |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M

     And this is the main reason to use an Orange Pi Zero with a NAS expansion rather than a Raspberry Pi machine. Yes, the mSATA is fast :-) ... and with an independent Ethernet, this should work very well.

     I can't test the UAS thing because I need the mainline kernel, and there is no stable mainline kernel for the Zero (I need to work with stable versions for this particular product). And I will redo the WD Blue disk tests, but with the original Orange Pi cables that are still in the mail.
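     A quick way to confirm which bus a block device hangs off, without reading the whole lsusb -t tree, is to resolve its sysfs device link; the bus number appears in the resolved path. A sketch (sda/sdb are the devices from the listing above):

     ```shell
     # Resolve each block device to its sysfs path; for USB-attached
     # disks the path contains the bus/port the device is connected to.
     for dev in sda sdb; do
         printf '%s -> ' "$dev"
         readlink -f "/sys/block/$dev/device" 2>/dev/null || echo 'n/a'
     done
     ```

     If the two resolved paths go through different controllers, the data and swap devices really do have independent bandwidth, as argued above.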
  12. Well ... I have an mSATA device connected to the NAS board ... and these are the numbers. Now, with a real mSATA:

     root@orangepizero:~# hdparm -Tt /dev/sda1

     /dev/sda1:
      Timing cached reads:   684 MB in 2.00 seconds = 341.73 MB/sec
      Timing buffered disk reads:  92 MB in 3.02 seconds = 30.47 MB/sec

     As can be seen ... it is a little slower than the WD Blue hard disk with my custom cable, but so close that the speed is comparable. The disk is a V-NAND SSD 850 EVO 250GB mSATA from SAMSUNG (i.e. a good device).

     Now, the two main reasons this setup is a good one:

     1) No spin-up electricity consumption. The check is not very detailed (I don't have better measuring tools), but with a USB power tester the machine's maximum consumption was around 0.80 A from start-up, including the hdparm test.
     2) Look at it (picture below) ... no SATA cables, extremely compact.

     About my configuration: this setup also has a SanDisk Cruzer Fit USB 3.0 8GB stick (the short one, on the USB port that is not shared with the mSATA device). This is for the swap and other temporary stuff, so I don't degrade the OS SD card by constantly rewriting there. And a 3A power supply.

     It is also important to add some screws to the mSATA storage card. When you insert it, the card sits at 45 degrees from the NAS board plane, so you must push it down and keep it there with "something" ... in this case, two nylon screws with a nut between the mSATA and the NAS, and a short standoff below.

     I still need to add a 5V fan and the case, and my setup is ready to go to production (with my particular information system on it).
  13. Hi tkaiser. Good reference on scargill ... What I see, in general, is the following:

     1) SBCs (such as the Raspberry or the Orange machines) are usually designed to work at 5V and to be used in a myriad of different conditions. Some of them were built with micro USB 5V power connectors because of the already-established smartphone custom of charging through such ports (SBCs and smartphones are cousins).
     2) BUT there is a combination of negative issues in doing that, in particular because micro USB 2.0 (the small connector) was not designed to carry more than 500 mA, and because there are many different types of USB cables around, some of them incapable of delivering more than 500 mA reliably because they were not designed for that.
     3) When adding more responsibility to the (power supply / micro USB connector / USB cable) trio, without a guarantee that every part is up to it, trouble is at the door. One risk is data corruption in the attached storage devices, in particular those that also depend on the same power supply through the SBC they are connected to (a typical problem when writing to SD cards). Although this was a common issue on the first Raspberry Pi generation, it was mitigated with some changes in software and hardware, but never definitively eradicated ... again, it depends on working conditions and general requirements.

     SO:

     1) It is not possible to use any SBC for any solution. They are small and they have limitations that must be correctly understood and addressed. For example, the Raspberry Pi 3 has better WIFI than the Orange Pi Zero, but the Orange Pi Zero handles USB bandwidth better than the Raspberry Pi 3.
     2) When you need a higher than usual amount of electricity (for example when connecting a hard disk to the SBC), and there is more than one option for supplying it, it is IMPORTANT to use the best option for the job. In the case of the Orange NAS together with an Orange Pi Zero, although the assembly works over the micro USB connection, the barrel jack on the Orange NAS is the right connector to use, because it has been designed to power that set of devices in a more trustworthy way. AND the right power cable and power supply must be used (because there are low-capacity 4.1/1.7mm cables and power supplies around). For example, this power supply with a 4.1/1.7mm barrel adaptor will, for sure, be troublesome: http://www.navilock.de/produkte/G_41382/merkmale.html?setLanguage=en

     It could be interesting to assemble some kind of matrix of which resources best resolve this or that requirement. I am still waiting for some devices for testing (they arrived at my courier company during the holidays) ... when I have more practical data at hand I can work on a first version of that.

     P.S. Yes, the "spin-up" period is critical because of its high electricity requirement.
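     The cable problem above is basically Ohm's law at work. A rough sketch of why a thin cable ruins things (the 0.4 ohm round-trip resistance and the 2 A load are assumed, illustrative values, not measurements from the post):

     ```shell
     # Voltage arriving at the board = 5 V minus the I*R drop over the
     # cable. 0.4 ohm round trip (thin, long micro USB cable) at 2 A
     # (board plus disk spin-up) already drops 0.8 V.
     awk 'BEGIN {
         v_supply = 5.0; r_cable = 0.4; amps = 2.0
         printf "at the board: %.1f V\n", v_supply - amps * r_cable
     }'
     ```

     4.2 V is far below the roughly 4.75 V that 5V boards and disks expect, which is why the barrel connector plus a short, thick cable is the safe choice.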
  14. I really didn't spend more time on that issue; I just updated the OS because I needed to continue with my tests. And it only happened before the update (who knows, maybe a side effect of something else). The data about your experiences is very useful.

     About using an mSATA device: in my case it really is a useful measure because of the "size" factor. The Orange Pi Zero together with the NAS card and the mSATA makes a very compact combination without strange external cables around. You know, performance is not always "the" key factor :-) ... oh yes, and the NAS with the OPIZ and an external SATA hard disk works well when powered from the OPIZ micro USB port by a 2.5A 5V power supply (designed for a Raspberry Pi 2).
  15. Hi ... I acquired more OPIZ boards and some NAS expansions (just before they offered the H3 and H5 versions ... well, maybe next purchase for those machines). When I received them, I realized I needed the special SATA cable, so I made a small Frankenstein from some old cables I had around (just for initial testing purposes until I have properly supported cables).

     I have an Orange Pi power supply, but it is attached to a 2E device that I can't turn off right now, so I decided to take a small risk and power only the OPIZ (no power to the NAS board), both running from a CanaKit 2.5A micro USB power adaptor. The OPIZ powering the NAS worked, at least to the point of lighting its tiny green LEDs.

     Next test: put USB memory sticks in all the USB ports (3 of them) ... and they work well, as 480 Mbps devices. OK, this means the power from the OPIZ to the NAS is enough, at least for that.

     And then, the big test ... a WD Blue 500GB 2.5-inch hard drive connected to the NAS. And ... it works!! Even together with another USB stick on the same machine. Without using the NAS's dedicated power connector.

     NOTICE: Before updating Armbian, the hard disk was working as a USB 1.1 device ... painfully slow ... so update the operating system before testing this.

     OK ... how fast is this? (after the update)

     root@orangepizero:~# hdparm -Tt /dev/sda1

     /dev/sda1:
      Timing cached reads:   686 MB in 2.00 seconds = 342.52 MB/sec
      Timing buffered disk reads:  74 MB in 3.03 seconds = 24.41 MB/sec

     root@orangepizero:~# hdparm -Tt /dev/sdb1

     /dev/sdb1:
      Timing cached reads:   684 MB in 2.00 seconds = 342.08 MB/sec
      Timing buffered disk reads:  94 MB in 3.05 seconds = 30.80 MB/sec

     In this case, sda1 is a Lexar 32GB memory stick and sdb1 is the WD Blue. So, the raw speed of the hard disk is good enough for many purposes.

     NOTICE: When attaching a USB memory stick to the USB port next to the SATA connector, the SATA disk is expelled in some way (it remains mounted but with I/O errors). This must be the multiplexed ports indicated previously in this thread. So I imagine (I still don't have one to test) that an mSATA device will disable the "other" USB port on the NAS card (or the USB will disable the mSATA device). Be careful when using those ports if there are SATA devices connected to the NAS card.

     A final "initial" test ... copying a big file over Ethernet from a Mac Mini with scp. The average speed is 10 MB/s, i.e. 80 Mbps (once you account for the ssh and operating system overhead, that is not so bad). However, it is clear that the network is the real bottleneck, looking at the raw SATA speed. In this case, and with a regular OPIZ (H2+, 512MB), the speed is good enough for not-so-demanding backups, some media files, and other usages that behave well on 100 Mbps. OR ... when the OPIZ performs the tasks on the hard disks itself, as with databases, where the 100 Mbps is not in the access equation. When I have an mSATA at hand I will include more numbers.
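     The scp figure can be sanity-checked against the link speed with a simple unit conversion (the 10 MB/s value is the one measured above):

     ```shell
     # 10 MB/s measured with scp, expressed as Mbps and as a fraction of
     # the 100 Mbps Fast Ethernet line rate.
     awk 'BEGIN {
         mb_per_s = 10; link_mbps = 100
         mbps = mb_per_s * 8
         printf "%d Mbps = %.0f%% of line rate\n", mbps, mbps * 100 / link_mbps
     }'
     ```

     That is 80% of line rate, so almost all the remaining 20% is ssh encryption and TCP overhead, which confirms that the 100 Mbps Ethernet, not the SATA path, is the bottleneck.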