malvcr

Everything posted by malvcr

  1. Thanks for the references ... I am cloning the armbian-alike branch now. Before starting on the developer area I will read a little and order my thoughts, to define what the best approach could be.
  2. I was reading the nand-sata-install script ... it seems very monolithic to me. I will do two things: 1) A minimum quantity of changes for it to work on the R2 following Armbian standards. 2) A more modular approach to reduce the quantity of changes, make the process clearer and avoid side effects. The second one will be just a "proposal" ... :-)
  3. OK ... let me check nand-sata-install first. I was able to install to the eMMC, so I have something to compare with. Then the kernel config, in particular the crypto stuff (I need to check the Armbian style so as not to break what can't be broken). I don't have experience with U-Boot, but let me read a little. This is an important thing to learn :-)
  4. I am using frank-w's 4.14.71 kernel (I had issues even with 4.14.69), installed on the internal eMMC. Why the R2 and no other machine? I have several Orange, Raspberry, Banana, Parallella ... Arduino ... The R2 is not a perfect machine for many cases, but it works well in some applications. I am trying to work with it focused on some basic elements:
1) Two SATA ports from the factory.
2) 2 GB RAM.
3) 5 addressable (more than 100 Mbps) networking ports.
4) ARMv7.
5) Maybe, the WIFI part.
6) Crypto enhancements.
7) Local eMMC storage.
It has other "features", but that is my focus. My SATA requirements must be "fast enough". I could complement the shared SATA channel with a USB3 disk. I am not really moving huge quantities of data constantly, mainly some data from time to time. Memory is important. SinoVOIP made an R64 machine with only 1GB and I think it was a mistake. You can do many things with those ethernet ports. If I need a router or a switch I can buy a cheap one ... but with the kind of power this machine has behind them, these ports acquire new life and are not mere data access points. ARM is powerful, power friendly, complete. I have been working with Intel and other platforms for decades, but with ARM I don't need those monsters. I prefer small, well-crafted diamonds. WIFI ... it depends on whether you have the R2 exposed or not. My environments are usually multi-computer ones, and you don't always need wireless there. The crypto part is VITAL when dealing with secure scenarios. The internal storage is a must. I can't depend on SD cards; they are very fragile. Of course, I can't burn out the eMMC with swap and other similar things. So ... now the 5 ports can have independent addresses. Great!!! But the crypto sometimes works, sometimes not. It is slow or fast, supported or not. That part continues to be very unstable for me. In the meanwhile, I am working with AF_ALG and whatever the kernel can give me, but I think this will improve in the future.
Can I help with that part? .... let me know what I can do. I have several R2 1.1 and 1.2 machines, one of them in production with a customer.
  5. Interesting ... I thought my reply was not stored or was deleted. Anyway ... I was checking with cryptodev and the BPI 4.4.70 some time ago (I have some posts in the BPI forum about the R2). It was faster, but what I didn't like is that cryptodev is not accepted as an official kernel module. This is why I was working with AF_ALG. Also, although openssl works, it is a very big piece of code that could hide some "issues" for my security-centered work (this already happened with openssl). This is also the reason I am not continuing with BOTAN, a very complete ciphering platform; I prefer something embedded in the kernel, or my own small and light framework. In fact I am not 100% satisfied with AF_ALG. It is a very artificial method (with terrible documentation) and it doesn't work very well for small block sizes. I was trying to mimic the openssl "speed" benchmark while encrypting with very big block sizes, but the openssl "enc" option doesn't let me work with them. In such a case, I prefer not to use openssl and to do my own tests with my own code. Let me see if I can put together a minimal AF_ALG testing and basic ciphering tool to share, with source code clear enough to play with.
  6. Yes, this was my first attempt a long time ago. However, as you describe, things on the R2 were very unstable in those days. Also, cryptodev is not an official part of the kernel, so I prefer to use AF_ALG. (The only problem with these things is that the documentation is horrible, inaccurate and incomplete ... you need to catch details from everywhere to understand what to do.) And you are right, it is necessary to recompile openssl for cryptodev ... but also to obtain the AF_ALG engine (in openssl) and the module (for the kernel). In general, openssl is not prepared from the factory to work with this type of extension, which is very important for some types of hardware such as the R2. In fact, AF_ALG works well, it is just that the corresponding modules are not included by default (I really don't know why). My current efforts are related to why cryptsetup can't use AF_ALG without the BPI kernel (there is something with the drivers). But, although not so fast, LUKS works well on the R2 even without the hardware support.
  7. I am configuring another BPI-R2 machine, and I was checking the benchmarks. For this I am using 4.14.71 with Ubuntu. The numbers are not better than my previous attempts, but this happens because the standard openssl doesn't have the afalg engine. So, I downloaded the latest openssl source, recompiled it (zero modifications) and repeated only one test, without and with the afalg engine, to verify how the R2 is behaving.

Without AFALG:

apps/openssl speed -elapsed -evp aes-256-cbc
You have chosen to measure elapsed time instead of user CPU time.
Doing aes-256-cbc for 3s on 16 size blocks: 3708058 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 64 size blocks: 1104719 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 292752 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 74300 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 9329 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 4662 aes-256-cbc's in 3.00s
OpenSSL 1.1.2-dev xx XXX xxxx
built on: Fri Oct 5 05:54:31 2018 UTC
options:bn(64,32) rc4(char) des(long) aes(partial) idea(int) blowfish(ptr)
compiler: gcc -fPIC -pthread -march=armv7-a -Wa,--noexecstack -Wall -O3 -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DNDEBUG
The 'numbers' are in 1000s of bytes per second processed.
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes   16384 bytes
aes-256-cbc   19776.31k   23567.34k   24981.50k   25361.07k    25474.39k    25460.74k

With AFALG:

apps/openssl speed -elapsed -evp aes-256-cbc -engine afalg
engine "afalg" set.
You have chosen to measure elapsed time instead of user CPU time.
Doing aes-256-cbc for 3s on 16 size blocks: 129499 aes-256-cbc's in 2.95s
Doing aes-256-cbc for 3s on 64 size blocks: 115145 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 78540 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 34189 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 5404 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 2756 aes-256-cbc's in 3.00s
(same OpenSSL 1.1.2-dev build as above)
The 'numbers' are in 1000s of bytes per second processed.
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes   16384 bytes
aes-256-cbc   702.37k     2456.43k    6702.08k    11669.85k    14756.52k    15051.43k

These are the numbers without the "-elapsed" parameter:

Without AFALG:
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes   16384 bytes
aes-256-cbc   19776.55k   23565.55k   24981.25k   25360.04k    25556.85k    25471.66k

With AFALG:
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes   16384 bytes
aes-256-cbc   8008.18k    27638.99k   251372.80k  1167974.40k  infk         4513792.00k

I am not sure ... but it seems that afalg is not available in the stock Armbian (I checked this with a supported M2+). With SBCs it seems important to have this available, to use the machines' potential.
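For reference, the relation between the "Doing ... in 3.00s" lines and the final table is simple arithmetic, and it also explains the absurd non-elapsed AFALG figures. A small sketch (the helper name is mine), under the assumption that openssl divides bytes processed by the measured time, which is user CPU time unless -elapsed is given; with the afalg engine the work happens in the kernel, so user time is nearly zero and the quotient explodes (hence entries like infk):

```python
# Each figure in the openssl speed table is
#   (operations * block size) / measured seconds, in 1000s of bytes.
def throughput_k(ops: int, block: int, secs: float) -> float:
    return ops * block / secs / 1000.0

# First "Doing" line above: 3708058 ops on 16-byte blocks in 3.00 s.
print(round(throughput_k(3708058, 16, 3.00), 2))  # matches the 19776.31k entry

# Without -elapsed the divisor is user CPU time. If AF_ALG leaves only
# ~0.08 s of user time for the 256-byte run, the figure inflates ~40x:
print(round(throughput_k(78540, 256, 0.08), 2))
```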
  8. Perfect ... I updated my M2+ and the zRAM is there ... I will jump to the right topic. Thanks.
  9. Oh ... maybe it is my way of writing English. Excuse me. Right now, without zRAM, and because of the particular restrictions of the R2 and the requirements of OpenKM, I need to add SWAP, and I have no intention of using the eMMC for that. So, currently, I am swapping to hard disk, and because of the security requirements I need to encrypt the SWAP partition (the SWAP will put the sensitive data on disk sooner or later). This was my situation before knowing about this option; I agree 100% it is not the best setup. So ... zRAM is very nice, a much better option. I will use it to increase the R2's capacity in this particular application, and to eliminate the DISK SWAP because ... simply ... it makes no sense (comparing an encrypted swap disk with zram). It will be faster, more secure, and simpler to configure. And, as the Helios4 also has 2GB of RAM, if I use that machine I also need to use zRAM. In fact, a machine with less than 2GB of RAM is useless for this type of application.
  10. Thanks tkaiser .... that zRAM is extremely important. The overcommit is a real problem, but the security one is bigger. Be it on the R2, the Helios4 or any of these SBC machines, sometimes it is necessary to define SWAP to be able to run certain tasks. Maybe because they are heavy, or in other cases because you have different users performing several independent tasks. However, no matter how you try to protect sensitive data, the SWAP will put it on disk without any type of protection. The alternative, then, is to encrypt the SWAP, and this carries an extra burden because of the constant ciphering and de-ciphering, together with the disk bottleneck. When using RAM, you eliminate the need for encryption (when the machine is off, the data disappears), and you also remove the disk latency. And, of course, compression is less heavy than encryption (of the two, it is the one that deserves to be used). So, comparing them, it must be a win-win scenario. Right now I can't use it because the R2 I am working with is ready for production. But I need to set up the next one, and I will add btrfs and zRAM. I can't use Armbian on the R2 because it is not yet supported, although I am using it on a BPI-M2+ included inside the appliance (that gives me peace of mind). For the R2 I have Ubuntu 18.04 and a special 4.14.71 kernel worked on by frank-w (his name in the Banana forum) with some extra definitions to use AF_ALG; oh ... I can even read the temperature :-) ... So as not to stray too far from the thread's theme: I think the Helios4 should have natural access to zRAM and all these nice "power" features. In fact, I don't know how to enable it from the Armbian point of view. Could it be an extra option in the armbian-config tool?
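To put a rough number behind "compression is less heavy than encryption" for swap: a redundant memory page shrinks to a fraction of its size, which is exactly what zram exploits. zram itself uses lzo/lz4 inside the kernel; stdlib zlib is only a stand-in here, and the sample page is fabricated for the demo:

```python
# A zram-style experiment: compress a fake 4 KiB page and see how much
# RAM a swapped-out copy of it would actually occupy.
import zlib

page = (b"typical redundant page contents " * 200)[:4096]
packed = zlib.compress(page, level=1)   # fast setting; zram also favors speed

print(f"{len(page)} -> {len(packed)} bytes "
      f"(ratio {len(packed) / len(page):.2f})")
```

Real pages compress worse than this fabricated one, but even modest ratios mean zram effectively enlarges the R2's 2GB while keeping the swapped data off the disk entirely.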
  11. I have a real-case scenario for this. Right now, I am using a BPI-R2 for running OpenKM. It has been difficult, but I have it running very well now (without the antivirus support, which always kills the machine). It is a very nice document-management combination. In fact, OpenKM recommends 4GB on a 64-bit platform (dual core), but the BPI-R2 with 2GB and a 32-bit (4 core) CPU works well. It seems the Helios4 could be a base to vary from when I need to work with bigger files, and having 4GB could help me work with more concurrent users. There are more options that only work with regular storage.
  12. Results from the thermal script ...

/sys/devices/virtual/thermal/thermal_zone0: cpu_thermal 0
/sys/class/thermal/thermal_zone0: cpu_thermal 0
cat: /sys/kernel/debug/tracing/events/thermal/thermal_zone_trip/type: No such file or directory
cat: /sys/kernel/debug/tracing/events/thermal/thermal_zone_trip/temp: No such file or directory
/sys/kernel/debug/tracing/events/thermal/thermal_zone_trip:

mmm ... I don't think what you are looking for is there. There are some already-made improvements that I need to check, but I am not so sure temperature control is one of them. For now, as I am working on an enclosed case for this "hot" machine, I am using active cooling. But this is for an environment that will typically need it ... some day ... this could work and I can improve my setup.
  13. There is something strange with OpenSSL as used by the benchmark on the R2 machine. Last year I made some tests with a rebuilt OpenSSL using cryptodev, and these were the numbers (I must clarify that this was with a 1.1 R2 version). First, with the "standard" OpenSSL distributed with the base Linux:

openssl speed -evp aes-256-cbc
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes
aes-256-cbc   17362.04k   19427.14k   20057.77k   20213.76k    20258.82k

Here are the numbers with the special OpenSSL version (compiled by me):

type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes   16384 bytes
aes-256-cbc   30397.60k   271372.80k  637806.93k  7580876.80k  infk         7676723.20k

The crypto engine works, but not with the standard software. I need to check how you are running OpenSSL and what I can do to it, to see if it is possible to measure the R2's crypto engine from the benchmark. Right now I am working with AF_ALG instead of cryptodev, so as not to deviate from the standard software and to spare myself some burdens dealing with cryptographic libraries. It is fast, although I still need to have numbers at hand (a lot of work these days).
  14. R2 1.2 without the neon parameter (and with some services running) ... This machine is working, together with a BPI-M2+ H2+, inside a plastic box with a big server-grade fan. The benchmark can't check the temperature :-( ... The machine has two Toshiba 2GB hard disks in a RAID-1 configuration.

Memory performance:
memcpy: 1502.1 MB/s (0.2%)
memset: 3799.8 MB/s

7-zip total scores (3 consecutive runs): 2391,2545,2565

OpenSSL results:
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc   27627.19k   31935.34k   33210.62k   33593.34k    33737.39k
aes-128-cbc   27584.11k   31872.77k   33287.17k   33596.07k    33712.81k
aes-192-cbc   24180.63k   27425.83k   28424.28k   28639.23k    28781.23k
aes-192-cbc   24145.71k   27322.43k   28412.07k   28659.37k    28699.31k
aes-256-cbc   21727.44k   24301.10k   25082.11k   25248.43k    25346.05k
aes-256-cbc   21694.57k   24263.27k   25082.11k   25247.74k    25294.17k

Full results uploaded to http://ix.io/1iGw. Please check the log for anomalies (e.g. swapping or throttling happened) and otherwise share this URL.

R2 1.2 with the neon parameter and those services off:

Memory performance:
memcpy: 1504.7 MB/s
memset: 3802.0 MB/s

7-zip total scores (3 consecutive runs): 2626,2601,2569

OpenSSL results:
type          16 bytes    64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc   27524.84k   31910.27k   33292.54k   33607.00k    33759.23k
aes-128-cbc   27591.76k   31838.85k   33295.27k   33685.50k    33783.81k
aes-192-cbc   24112.55k   27433.26k   28457.56k   28686.34k    28778.50k
aes-192-cbc   24179.14k   27386.11k   28461.23k   28730.37k    28805.80k
aes-256-cbc   21641.58k   24290.86k   25089.37k   25263.45k    25335.13k
aes-256-cbc   21703.41k   24254.40k   25069.99k   25299.97k    25354.24k

Full results uploaded to http://ix.io/1iGV. Please check the log for anomalies (e.g. swapping or throttling happened) and otherwise share this URL.

Is that "neon" a parameter? ... I really can't see an important difference between the two runs. Anyway, you have two sets of numbers to deal with. Another day I can check with the R2 1.1 version ... the issue is that that machine is not in working order right now :-) ... also, I will do it without RAID-1 and using an SSD instead of regular hard disks.
  15. Thanks for the opinion about RAID ... I will check btrfs with care. Really, I don't know why to have the R64 either. The R2 is advancing (slowly), but the R64 will just be a noise source, as it is based on a different SoC. From my point of view, I would prefer to wait at least another year of investment in the R2 platform before announcing that strange machine. They need to learn from the Raspberry people ... they are patient, they squeeze their machines' capabilities to the maximum, and then they evolve them a little, obtaining solid devices. For now I have the RAID-1 just as a safeguard against catastrophe. My system will make verified and versioned backups outside the system, but if I can improve the general reliability with btrfs I will do it. And, thinking about it, here is where more cores make a difference. Software RAID and redundant, verified file systems need extra CPU work; when you have fewer cores, as with the R64, other concurrent areas will suffer.
  16. I didn't notice this board until yesterday. There is another reference here: https://www.cnx-software.com/2018/07/27/banana-pi-bpi-r64-router-board-is-powered-by-mediatek-mt7622-supports-802-11ac-wifi-poe/ For now, I have been working on an R2-based information system (not a router), and the machine works well. I even have a Linux-based RAID-1, and I am using the cryptography capacity with AF_ALG. I am aware that the machine is not a speed beast and that the RAID has throughput compromises because both SATA ports share the same PCIe lane. However, this is good enough for many applications like the one I have. My point here is ... the R2 has a 4-core 32-bit CPU with 2GB RAM; so why make a 2-core 64-bit machine but cut the RAM in half? In fact, in the current SBC world with limited resources, 32-bit architectures are a better solution than 64-bit ones. In my opinion, creating an application-enhanced SBC in 2018 - 2019 needs to take into consideration a 4GB RAM ratio and 4 or more cores. This can open up the machine's usefulness in many areas. And there should also exist some very small IoT-oriented devices with 1GB or less RAM, but not a big bird with small wings. The R64 machine is very focused on the router application and sequential programming. The R2 is a better machine for general usage scenarios (the R1 was a prototype, not a really useful machine). PS. Having only one SATA port is not such a big deal. In fact, it is better to add a hardware-based RAID complement like this one (https://www.amazon.com/RAIDON-iR2420-2S-S2-2-5-Inch-RAID-Storage/dp/B0065SMI3S/) .. just an example.
  17. I will be sincere ... after weeks trying to make the DS1961S work on an SBC and having only partial results, I gave up on this. Now I have an almost completely functional prototype working with an Arduino that is connected to the SBC. What I think is that the logic behind a security device such as the DS1961S makes it a poor candidate to work through a driver, or through the basic Linux OneWire interface. It seems the basic Linux driver is very unstable for that. As the Arduino is an almost "naked" device, it is easier to control that OneWire device in a more precise way, with the opportunity to create more complex usages with less effort than trying to do everything directly on the SBC. Maybe, in the future, I could try again to use an SBC directly, such as the Orange. Let's see how this thing evolves --- and --- although everything can be done directly with an SBC running Linux, sometimes there are better methods to achieve the expected goals.
  18. My mistake .... the DS1961S is completely different from the standard EEPROM iButton devices because of the way the memory is used, restricted by "secrets". This happens because it is a security-enhanced device. What I am doing now is to keep it "naked" and to work with the "rw" raw data access facility and no particular kernel module. Something else is important: I can't use 4.7K resistors but 2.2K ones, because the device has different electrical needs than the temperature sensors. So, before trying to use any other 1-Wire device, check the particular information carefully before going crazy :-) ... My only definitions, for now, are in the /boot/armbianEnv.txt file: overlays=w1-gpio param_w1_pin=PA20 : this is not for an Orange but for a Banana Pi, but the idea is the same (it is just that the GPIOs are not 100% compatible, as described by TKaiser).
  19. Just in case somebody has the same problem I am working with: this thread's name is "Orange Pi PC 1-Wire" ... but really it should be "Temperature sensors on Armbian using 1-Wire". I have been working with Bananas, Oranges, Raspberry and even Arduino devices using 1-Wire iButton devices for authentication procedures. In the Arduino world that part is "almost" ready, but in the Linux world (including Armbian) it is not a complete story. Trying to have a complete scenario to work with, I have been checking the 18B20 case, it being a well-documented one in Linux. However, in real life, this is very different from the DS1961S iButton I am working with now. For example, the thread indicates that w1_therm must be loaded, but that driver "only" recognizes the temperature 1-Wire devices. For the others, a different driver must be loaded. These are the 1-Wire drivers currently included in the 4.4.18 kernel:

w1_ds2405 - "Driver for 1-wire Dallas DS2405 PIO."
w1_ds2406 - "w1 family 12 driver for DS2406 2 Pin IO"
w1_ds2408 - "w1 family 29 driver for DS2408 8 Pin IO"
w1_ds2413 - "w1 family 3a driver for DS2413 2 Pin IO"
w1_ds2423 - "w1 family 1d driver for DS2423, 4 counters and 4kb ram"
w1_ds2431 - "w1 family 2d driver for DS2431, 1kb EEPROM"
w1_ds2433 - "w1 family 23 driver for DS2433, 4kb EEPROM"
w1_ds2438 - "1-wire driver for Maxim/Dallas DS2438 Smart Battery Monitor"
w1_ds2760 - "1-wire Driver Dallas 2760 battery monitor chip"
w1_ds2780 - "1-wire Driver for Maxim/Dallas DS2780 Stand-Alone Fuel Gauge IC"
w1_ds2781 - "1-wire Driver for Maxim/Dallas DS2781 Stand-Alone Fuel Gauge IC"
w1_ds2805 - "w1 family 0d driver for DS2805, 1kb EEPROM"
w1_ds28e04 - "w1 family 1C driver for DS28E04, 4kb EEPROM and PIO"
w1_smem - "Driver for 1-wire Dallas network protocol, 64bit memory family."
w1_therm - "Driver for 1-wire Dallas network protocol, temperature family."

On 4.16-rc1 only w1_ds28e17.c ("w1 family 19 driver for DS28E17, 1-wire to I2C master bridge") is additionally included. So ... in my case, no family 33 (DS2432 - DS1961S) driver is ready to use, and I would need to choose the most similar one and derive my own kernel driver from it. The DS1961S is an iButton with a 1Kb EEPROM and a SHA-1 engine, "maybe" similar to the DS2431 (family 2d, in the storage part). For extra information about 1-Wire families, check this: http://owfs.sourceforge.net/family.html And ... why the DS1961S? ... Well, it has "advanced" characteristics that help me develop very-difficult-to-copy hardware key devices, which can't be done with temperature devices or with the simple DS1990A iButton (family 1 ... not supported natively in Linux either).
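For completeness, here is what reading a supported temperature-family device looks like once w1_therm is loaded: the kernel exposes one w1_slave file per sensor (on a real board the path would be something like /sys/bus/w1/devices/28-*/w1_slave). The parser below is my own sketch, fed with a sample string in the captured format so it runs without hardware; the DS1961S, as said, never gets this far because no family 33 driver exists:

```python
# Parse the two-line w1_therm sysfs output of a DS18B20-style sensor.
def parse_w1_slave(text: str) -> float:
    lines = text.strip().splitlines()
    if not lines[0].endswith("YES"):      # kernel-side CRC check failed
        raise IOError("bad CRC from 1-wire sensor")
    raw = lines[1].rsplit("t=", 1)[1]     # reading in milli-degrees Celsius
    return int(raw) / 1000.0

sample = ("4b 01 4b 46 7f ff 05 10 d8 : crc=d8 YES\n"
          "4b 01 4b 46 7f ff 05 10 d8 t=20687")
print(parse_w1_slave(sample))   # -> 20.687
```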
  20. mm ... I made another installation ... it is working now. Forget the "wireless-power off" in the wpa_supplicant.conf file; that was causing the problem. Also, I included some Debian repositories and now I am able to install software (although it is better not to abuse this, because of the lack of space and the nature of the NAND itself).

root@DietPi:/etc/apt# cat sources.list
deb http://ftp.us.debian.org/debian jessie main contrib non-free
deb-src http://ftp.us.debian.org/debian jessie main contrib non-free

And now I am trying to convert this into an Access Point. It is totally configured and it says it is working, but nobody can see the machine from outside (so, obviously, it is not working). I will try this for a while, to be able to give my impressions about this possible usage for the machine.
  21. This is the article link with references (it includes the link for the download) ... https://surfero.blogspot.com.es/2017/09/orange-pi-iot-2g-flashear-memoria-nand.html Just in case, the download is at https://mega.nz/#!BFcmiTpL!29AQt7E1odjNUaFV4JNXN8KnVM2dPSocf77EP8uFnPo I was going nuts trying to make this work, and then I checked the boot setup ... oh .. 921600 ... my minicom was at 115200. I first tried to change the boot to 115200, but as that image is not "common", it was a little risky to do that. So I changed minicom to 921600 instead, pressing the next button in the configuration. Now ... the login is perfect in the console :-) ... and ... WIFI was working around 10 times faster. But then I did something, and now it is impossible for the WIFI to complete the authentication phase with the Access Point (I have been fighting with this for a while, with mixed, but all bad, results). So I connected an ethernet dongle to the USB port, and it works very well with a static eth0 address. In general, what I think is that the serial console overruns (in the thousands), on the only core the machine has, caused a terrible side effect in the networking stack. It seems it is also necessary to add "wireless-power off" as the last line in the wlan0 configuration (interfaces file) ... but right now I can't confirm this is a key factor, because my WIFI is not working well. A possible reason for the failure is that the machine always produces a different MAC address, and declaring a manual one with the "hwaddress ether" directive in the interfaces definition causes some kind of trouble when the machine does the handshake with the access point ... again, something to be checked. An extra detail: when flashing the machine it is necessary to press, and keep pressed, the button until the flash has finished.
And in the whole process no SD card is needed, eliminating the original requirement of having strange and difficult-to-find 8GB SD cards with low consumption requirements. After the machine is operational, regular SD cards can be used; I even added a 2GB swap file on a 16GB SanDisk card with no problems (of course this won't be fast, but it will permit me to run more programs with the little 256MB of space). -- OK ... my next test, one of these days, will be to use something more complete than DietPi on the NAND, at least the booting component. It is OK to have the rest on an SD card, if the card is recognized after booting. :-)
  22. Hi Surfero75 ... I made another attempt with the machine. With your image (based on DietPi, which is based on Armbian), it is possible to boot from the machine's internal storage and then add a 16GB SD card, and the machine will read it. That is nice :-) ... However, WIFI continues to be an issue. I changed the WPA_Supplicant country from GB to US and that reduced the problems a little, but it continues freezing and restarting. Also, it is difficult to log in on a TTL console. I can type the user, but after typing the password nothing happens ... although this could be a software more than a hardware problem. And your image is incomplete (I understand it was necessary to erase things to fit it in the machine's space, but some things are not there even when they are referenced ... maybe DietPi was not a good option to work with; a plain Armbian would be better). But good work ... this brings some hope for the machine :-)
  23. It is good to know that the R2 is being taken seriously. This machine has important things and, although there are very good alternatives, it has a market place. I have sent, maybe 100 times by now, an 825-megabyte Ubuntu iso file from one R2 to another and between an R2 and a Mac Mini, testing different types of configurations (in the meanwhile I am creating the system I will use the R2 for). Here are some numbers that could be useful (AES 256 bit):

without /dev/crypto: 100% CPU max - real 0m53.189s - user 0m44.210s - sys 0m8.030s
with /dev/crypto: 75% CPU max - real 0m27.015s - user 0m2.290s - sys 0m17.750s

Looking at this test alone, it is clear that with the cryptodev driver active (and with the right openssl compiled for it), the machine processes faster. And then there is the question of the 100% CPU ceiling when using only the CPU ... I was trying to figure out how to test that remaining 25% ... so I made a multithreaded program that received the data (running on one R2), and executed 4 parallel sets of openssl + sending data from the other R2. The general throughput for the whole "bundle" was around 45.8 MB/s. This is much higher than the around 17 MB/s I can get with only one similar session. The issue here is that the final speed can't be calculated taking only the crypto engine into consideration. A final test would need software designed for this, because when I cipher with openssl and then send the file over ethernet, I need to "write" the file to disk and re-read it, and the DISK is a key factor in the overall transmission speed. An extra write is really heavy here. So, if I want to see a wonderful speed without sacrificing the machine, the disk must speed up. The final numbers for secure transmission of data must involve all the key factors: CRYPTO+DISK+NET+CPU. But ... in general, I think it is good enough for my purposes.
When I have a better software platform to test everything together (without punishing any of the factors), I will come back to show my numbers.
  24. Making some tests ... rsync uses ssh or rsh (which for some reason is worse) ... openssl with cryptodev is fast on big files. There is no rcp (it was replaced by scp, which is part of the ssh set). So ssh sits in many important places in a regular distribution. I still must make more precise tests, but sending a file with ssh took around 3 minutes ... while encrypting it with openssl (AES-256) took around 37 seconds, and transmitting it with a simple perl script around 12 seconds. I know I can do better, by saving one disk read in the processing phase, by moving to C or C++. But this shows it is possible to get more from this machine without using the "normal" options.
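The "extra write" mentioned above can be removed even without going to C: if the cipher stream goes through a pipe, the ciphertext never touches the disk. A small sketch of the idea with Python's subprocess (the helper name and the pass-phrase handling are mine, for illustration, and it assumes an openssl binary recent enough to support -pbkdf2; a real transfer would feed the encrypting pipe straight into a socket, where here a decrypting process closes the loop instead):

```python
# Stream data through `openssl enc` via pipes: no intermediate file,
# so the disk is taken out of the encrypt-then-send path.
import subprocess

def openssl(args: list, data: bytes) -> bytes:
    # Run `openssl enc ...` with the payload on stdin, result on stdout.
    p = subprocess.run(["openssl", "enc"] + args,
                       input=data, capture_output=True, check=True)
    return p.stdout

COMMON = ["-aes-256-cbc", "-pbkdf2", "-pass", "pass:demo-secret"]

payload = b"pretend this is the 825 MB iso " * 1000
ciphertext = openssl(COMMON, payload)             # encrypt, pipe only
roundtrip = openssl(["-d"] + COMMON, ciphertext)  # decrypt from the pipe
assert roundtrip == payload
print(len(payload), "->", len(ciphertext), "bytes, round trip OK")
```

The same shape works from a perl or shell one-liner too; the point is only that stdout of the cipher step feeds the sender directly, so DISK drops out of the CRYPTO+DISK+NET+CPU equation.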