2rl

  1. Hi! I've been reading a lot about iptables in the last few days, but I can't get my head around the configuration I need. I'm still learning and this is beyond my scope. I've set up the Rock64 as an access point connected to a VPN server. I need to access the devices connected to my main router at 192.168.10.1. My wlan access point is at 172.24.1.1. eth0 and wlan0 are always tunnelled through tun0, and it must stay that way. Connected via wlan0 I can ping 192.168.10.176 (eth0) but not 192.168.10.1 (the Internet router) or anything else outside the wlan0 subnet. My aim is to reach SMB servers, NFS servers and SSH on any IP in the 192.168.10.x subnet from wlan0 while connected to the VPN, and vice versa. Is this even possible?

     This is my ip route output:

        ip route
        0.0.0.0/1 via 10.8.8.1 dev tun0
        default via 192.168.10.1 dev br0
        10.8.8.0/24 dev tun0 proto kernel scope link src 10.8.8.55
        128.0.0.0/1 via 10.8.8.1 dev tun0
        169.254.0.0/16 dev wlan0 scope link metric 1000
        172.24.1.0/24 dev wlan0 proto kernel scope link src 172.24.1.1
        185.44.76.118 via 192.168.10.1 dev br0
        192.168.10.0/24 dev br0 proto kernel scope link src 192.168.10.176

     Output of ip addr:

        ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000
            link/ether 9e:0d:db:d2:f9:a1 brd ff:ff:ff:ff:ff:ff
        3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
            link/ether 9e:0d:db:d2:f9:a1 brd ff:ff:ff:ff:ff:ff
            inet 192.168.10.176/24 brd 192.168.10.255 scope global br0
               valid_lft forever preferred_lft forever
            inet6 fd7c:b18a:e451:0:9c0d:dbff:fed2:f9a1/64 scope global mngtmpaddr
               valid_lft forever preferred_lft forever
            inet6 fe80::9c0d:dbff:fed2:f9a1/64 scope link
               valid_lft forever preferred_lft forever
        4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
            link/ether 08:10:7b:15:4a:31 brd ff:ff:ff:ff:ff:ff
            inet 172.24.1.1/24 brd 172.24.1.255 scope global wlan0
               valid_lft forever preferred_lft forever
        6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
            link/none
            inet 10.8.8.55/24 brd 10.8.8.255 scope global tun0
               valid_lft forever preferred_lft forever

     And this is my iptables:

        # Generated by iptables-save v1.6.0 on Thu Sep 20 11:15:24 2018
        *raw
        :PREROUTING ACCEPT [6734:463678]
        :OUTPUT ACCEPT [6489:2129649]
        COMMIT
        # Completed on Thu Sep 20 11:15:24 2018
        # Generated by iptables-save v1.6.0 on Thu Sep 20 11:15:24 2018
        *mangle
        :PREROUTING ACCEPT [6734:463678]
        :INPUT ACCEPT [6730:463225]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [6489:2129649]
        :POSTROUTING ACCEPT [6571:2139731]
        COMMIT
        # Completed on Thu Sep 20 11:15:24 2018
        # Generated by iptables-save v1.6.0 on Thu Sep 20 11:15:24 2018
        *nat
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o tun0 -j MASQUERADE
        COMMIT
        # Completed on Thu Sep 20 11:15:24 2018
        # Generated by iptables-save v1.6.0 on Thu Sep 20 11:15:24 2018
        *filter
        :INPUT ACCEPT [10:1262]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [16:2170]
        -A FORWARD -i tun0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -i wlan0 -o tun0 -j ACCEPT
        COMMIT
        # Completed on Thu Sep 20 11:15:24 2018

     This is my /etc/network/interfaces:

        # Network is managed by Network manager
        auto lo
        iface lo inet loopback

        auto br0
        iface br0 inet dhcp
        bridge-ports eth0 wlan0

     And finally my hostapd.conf, just to show that I've commented out bridge=br0 in order to be able to tunnel wlan0 traffic through the VPN. If I uncomment it I can access the other devices, but then wlan0 stops being tunnelled.

        #
        # armbian hostapd configuration example
        #
        # nl80211 mode
        #
        ssid=ARMBIAN
        interface=wlan0
        #bridge=br0
        hw_mode=g
        channel=40
        driver=nl80211
        logger_syslog=0
        logger_syslog_level=0
        wmm_enabled=1
        wpa=2
        preamble=1
        wpa_psk=66eb31d2b48d19ba216f2e50c6831ee11be98e2fa3a8075e30b866f4a5ccda27
        wpa_passphrase=12345678
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP
        rsn_pairwise=CCMP
        auth_algs=1
        macaddr_acl=0
        noscan=1

     Many thanks for any help given!
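     Looking at the rules above, traffic arriving on wlan0 is only ever forwarded to tun0; nothing forwards between wlan0 and br0, which would explain why only the Rock64's own address answers. A minimal sketch of what might be missing, reusing the interface names and subnets from the post (untested, and assuming no other rules interfere):

```shell
# Make sure IPv4 forwarding is on (assumption: may already be set elsewhere).
sysctl -w net.ipv4.ip_forward=1

# Allow wlan0 clients to reach the 192.168.10.0/24 LAN via br0,
# and let the replies back in.
iptables -A FORWARD -i wlan0 -o br0 -d 192.168.10.0/24 -j ACCEPT
iptables -A FORWARD -i br0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Masquerade LAN-bound traffic so 192.168.10.x hosts see the Rock64's
# own br0 address, avoiding the need to add a 172.24.1.0/24 return
# route on the main router.
iptables -t nat -A POSTROUTING -o br0 -d 192.168.10.0/24 -j MASQUERADE
```

     The existing MASQUERADE on tun0 can stay: the connected route for 192.168.10.0/24 on br0 is more specific than the VPN's 0.0.0.0/1 and 128.0.0.0/1 routes, so LAN-bound traffic should leave via br0 while everything else stays in the tunnel.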
  2. OK, I'm halfway through setting up my VPN router; I still need to change the iptables rules so I can access the local network devices through the Armbian access point. I've got some interesting results after enabling the access point and routing all my traffic through tun0.

     My first surprise was that, without implementing any of jmandawg's suggestions about isolating core 2, I ran speedtest from my laptop (connected to the ARMBIAN access point, traffic routed through tun0) and got constant speeds of 72 Mbit/s, which is almost the maximum my connection can reach, with pings of 28 ms, the minimum I get even without the VPN, directly from my router. I'm very pleased with this performance and wasn't expecting it. I suppose the fact that the Rock64's only job in this scenario is to encrypt, while running the tests, web browsing and any other processing is done by my laptop, shows the real encryption potential of OpenVPN on the Rock64 without any tweaking.

     The next step was to run speedtest from the Rock64 directly. Results: 25 Mbit/s and pings of 40 ms. Expected.

     Then I implemented jmandawg's suggestions. The results from my laptop didn't vary a bit; they were excellent before and stayed the same. The results from the Rock64 directly varied greatly: when I ran the test with "taskset -c 1 speedtest-cli" the results were 68 Mbit/s and a 28 ms ping. The same test with plain "speedtest-cli" went down to 23 Mbit/s, or 30 Mbit/s at most.

     I don't fully understand why, having isolated a core for OpenVPN, I still need to pin other programs to another core to get good performance out of OpenVPN. Shouldn't having core 2 isolated be enough?

     Another thing: I didn't have "/etc/systemd/system/openvpn.service". The only other place where I found openvpn.service, and where I made the changes, is "/etc/systemd/system/multi-user.target.wants/openvpn.service". Could this be affecting anything?

     I don't run my OpenVPN through NetworkManager but with the openvpn command.
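     On the systemd question: files under /etc/systemd/system/multi-user.target.wants/ are normally just symlinks to the real unit file, so an edit made there usually lands in the same unit; a drop-in override is the tidier way to make the change survive package updates. A sketch, assuming the unit name openvpn.service and the core-2 pinning described in the post:

```shell
# The .wants/ entry is typically a symlink; editing it edits the real unit.
ls -l /etc/systemd/system/multi-user.target.wants/openvpn.service

# Cleaner: create a drop-in override instead of touching the unit itself.
sudo systemctl edit openvpn.service
# ...then add, to pin the daemon to core 2:
#   [Service]
#   CPUAffinity=2

sudo systemctl daemon-reload
sudo systemctl restart openvpn.service
```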
  3. I've done what you suggested, and the result has dramatically improved the speed: from 21 Mbit/s to 50-55 Mbit/s. I thought that with the crypto extensions enabled in ARMv8, OpenVPN performance would automatically rocket compared to, say, the Raspberry Pi, without needing to change anything. This is the output. I've noticed that the ping is 55% higher for connections from the Rock64 than from my laptop; that could be interfering with the speed:

     WITH VPN AND CORE 2 ISOLATED FOR OPENVPN

        Linux 4.4.152-rockchip64 (rock64)   09/15/2018   _aarch64_   (4 CPU)

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.28    0.00    0.25    0.02    0.00   99.45

        Device:      tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
        mtdblock0   0.00       0.01       0.00      108        0
        mmcblk1     0.39      16.28       0.22   242009     3240
        zram0       0.11       0.05       0.38      736     5584
        zram1       0.02       0.08       0.00     1196        4
        zram2       0.02       0.08       0.00     1196        4
        zram3       0.02       0.08       0.00     1196        4
        zram4       0.02       0.08       0.00     1196        4

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  11.94    0.00    0.70    0.00    0.00   87.36

        Device:      tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
        mtdblock0   0.00       0.00       0.00        0        0

        ping -c 5 185.44.76.118
        PING 185.44.76.118 (185.44.76.118) 56(84) bytes of data.
        64 bytes from 185.44.76.118: icmp_seq=1 ttl=49 time=25.5 ms
        64 bytes from 185.44.76.118: icmp_seq=2 ttl=49 time=24.6 ms
        64 bytes from 185.44.76.118: icmp_seq=3 ttl=49 time=24.7 ms
        64 bytes from 185.44.76.118: icmp_seq=4 ttl=49 time=26.1 ms
        64 bytes from 185.44.76.118: icmp_seq=5 ttl=49 time=24.9 ms
        --- 185.44.76.118 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4006ms
        rtt min/avg/max/mdev = 24.633/25.221/26.146/0.572 ms

     WITHOUT VPN AND CORE 2 ISOLATED FOR VPN

        Linux 4.4.152-rockchip64 (rock64)   09/15/2018   _aarch64_   (4 CPU)

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.30    0.00    0.27    0.02    0.00   99.42

        Device:      tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
        mtdblock0   0.00       0.01       0.00      108        0
        mmcblk1     0.39      16.06       0.24   242009     3608
        zram0       0.10       0.05       0.37      736     5584
        zram1       0.02       0.08       0.00     1196        4
        zram2       0.02       0.08       0.00     1196        4
        zram3       0.02       0.08       0.00     1196        4
        zram4       0.02       0.08       0.00     1196        4

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  13.59    0.00    2.14    0.00    0.00   84.27

        Device:      tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
        mtdblock0   0.00       0.00       0.00        0        0

        ping -c 5 86.158.85.129
        PING 86.158.85.129 (86.158.85.129) 56(84) bytes of data.
        64 bytes from 86.158.85.129: icmp_seq=1 ttl=63 time=2.46 ms
        64 bytes from 86.158.85.129: icmp_seq=2 ttl=63 time=6.16 ms
        64 bytes from 86.158.85.129: icmp_seq=3 ttl=63 time=2.82 ms
        64 bytes from 86.158.85.129: icmp_seq=4 ttl=63 time=6.07 ms
        64 bytes from 86.158.85.129: icmp_seq=5 ttl=63 time=4.89 ms
        --- 86.158.85.129 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4006ms
        rtt min/avg/max/mdev = 2.462/4.484/6.167/1.575 ms

     If there aren't any further improvements that can be made to OpenVPN's performance, I will try WireGuard; it sounds like the ideal solution for devices like ours. If anyone thinks that OpenVPN performance should be better than what we've seen in this post, I'm all ears. Thanks all for your answers.
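     On the taskset results above: a process inherits its parent's CPU affinity, so pinning OpenVPN to core 2 does not stop an unpinned speedtest-cli from landing on the same cores as the rest of the system's load, which is why pinning the test itself to a different core also helps. A quick way to inspect and verify affinity (the core numbers here are only examples):

```shell
# Show the affinity list of the current shell.
taskset -cp $$

# Pin a command to core 0 and confirm from inside it: the child shell
# inherits the mask, so its affinity list should be just core 0.
taskset -c 0 sh -c 'taskset -cp $$'
```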
  4. Hi! I'm trying to create a VPN access point with a Rock64. I'm getting low speeds with OpenVPN even though the openssl tests show there is crypto acceleration. I'm new to all of this. I assumed the VPN speeds would be much higher since the crypto extensions are enabled in Armbian; that's why I chose the Rock64 and Armbian. This is the openssl test:

        openssl speed -evp aes-256-cbc
        Doing aes-256-cbc for 3s on 16 size blocks: 14416568 aes-256-cbc's in 3.00s
        Doing aes-256-cbc for 3s on 64 size blocks: 10625804 aes-256-cbc's in 3.00s
        Doing aes-256-cbc for 3s on 256 size blocks: 4907054 aes-256-cbc's in 3.00s
        Doing aes-256-cbc for 3s on 1024 size blocks: 1594735 aes-256-cbc's in 3.00s
        Doing aes-256-cbc for 3s on 8192 size blocks: 218375 aes-256-cbc's in 3.00s
        Doing aes-256-cbc for 3s on 16384 size blocks: 109757 aes-256-cbc's in 3.00s
        OpenSSL 1.1.0f  25 May 2017
        built on: reproducible build, date unspecified
        options:bn(64,64) rc4(char) des(int) aes(partial) blowfish(ptr)
        compiler: gcc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG -DOPENSSL_THREADS -DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/usr/lib/ssl\"" -DENGINESDIR="\"/usr/lib/aarch64-linux-gnu/engines-1.1\""
        The 'numbers' are in 1000s of bytes per second processed.
        type          16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
        aes-256-cbc  76888.36k   226683.82k   418735.27k   544336.21k   596309.33k   599419.56k

     My internet speed is around 76 Mbit/s download and 19 Mbit/s upload. The Rock64 without a VPN connection is capable of reaching those speeds, but with OpenVPN connected, using the same cipher aes-256-cbc, these are the results:

        speedtest-cli
        Retrieving speedtest.net configuration...
        Testing from Zare (185.44.76.118)...
        Retrieving speedtest.net server list...
        Selecting best server based on ping...
        Hosted by fdcservers.net (London) [0.96 km]: 38.804 ms
        Testing download speed...
        Download: 20.42 Mbit/s
        Testing upload speed...
        Upload: 17.88 Mbit/s

     I've been reading posts from other people talking about speeds of 60 or 80 Mbit/s through OpenVPN connections. Is 20 Mbit/s the maximum speed I will achieve with the Rock64? If not, what should I do? As I said, I'm new to this and don't know how to proceed; my guess is that perhaps OpenVPN is not compiled to use the crypto engine, but maybe I'm just talking nonsense. I'm using Armbian Stretch with desktop, legacy kernel 4.4.y. Thank you very much for your help.
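     One way to check whether the installed OpenSSL really takes the hardware-accelerated path is to compare its low-level C cipher implementation against the EVP interface, which is where the ARMv8 Crypto Extensions are wired in (and the interface OpenVPN uses). A short diagnostic run, assuming the -seconds/-bytes options of recent openssl speed builds:

```shell
# Does the CPU advertise an AES feature flag at all?
grep -m1 -ow aes /proc/cpuinfo || echo "no aes flag found"

# Software AES vs. the EVP path: if the -evp figures are several times
# higher, the crypto extensions are active and the bottleneck is likely
# elsewhere (e.g. OpenVPN's single-threaded tun processing).
openssl speed -seconds 1 -bytes 1024 aes-256-cbc
openssl speed -seconds 1 -bytes 1024 -evp aes-256-cbc
```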