Making it ping (when it shouldn't): With a bit of routing foo (Part 4)
OpenBSD version: 7.1 (this time for real) and some Ubuntu 22.04
Arch: Any
NSFP: This might be one of the most stupid things I ever did.
« Part 3: Adding IPv6 <> Part 5: NATNAtnat… »
Looking at the previous parts of this mini series, we are leaving two tiny problems out of scope. One is the reachability of the VPN service by clients. The other is that we are currently using exactly one IP address per user (v6), and the VPN endpoint’s address for all users. Neither of these things is really fun. To address this, we will have to do a bit of (source) routing. Specifically, we will make packets coming in over the VPN go to another box, where we can also do some additional NAT-ing.
Basic topology
What we basically have is the following topology/configuration, with the default routes of vpn-host and out-router going via eth0:
  192.0.2.1/29                           198.51.100.2/29
  2001:db8::192:0:2:1/64                2001:db8::198:51:100:2/64
               +------+                     +------+
               | eth0 |                     | eth0 |
               +---+--+                     +------+
                   |                           ^
                   |                           |  out-router receives routed:
                   |                           |  2001:db8:a::/64
                   |                           |  2001:db8:b::/64
                   |                           |  2001:db8:c::/64
  10.0.0.1/24      |                           |  198.51.100.128/25
  2001:db8:a::/64  v                           |
+-----+     +-------------+              +-----+------+
| wg0 |<----+  vpn-host   |              | out-router |
+-----+     +------+------+              +------------+
                   |                           ^
                   |                           |
                   |                           |
                   |                           |
                   v                           |
                +------+                   +---+--+
                | eth1 +------------------>| eth1 |
                +------+                   +------+
             2001:db8:b::2/64           2001:db8:b::1/64
              10.1.0.2/24                 10.1.0.1/24
We have our VPN host with our initial VPN endpoint from the previous iterations of this series.
On the VPN host, we have our wg0 interface, to which users connect. In addition, we have eth1 (which may also be wg1, i.e., another backhaul VPN link to $where_ping_is_more_free), which connects this host to another router.
We will use this other router to handle the ‘user traffic’ from the VPN, so it does not go out on the same IP address as our VPN endpoint.
Some basic routing
The first thing we have to do is add some routes on out-router for the networks used by vpn-host. (You also have to enable IP forwarding on both systems; I guess you know how to do that, but the usual sysctls are below. If you are not using IPv6, skip that part, too.)
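For completeness, these are the usual knobs, assuming vpn-host runs Linux and out-router runs OpenBSD as in the rest of this article (persist them via /etc/sysctl.conf on both systems):
vpn-host # sysctl -w net.ipv4.ip_forward=1
vpn-host # sysctl -w net.ipv6.conf.all.forwarding=1
out-router # sysctl net.inet.ip.forwarding=1
out-router # sysctl net.inet6.ip6.forwarding=1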
To let out-router know about 10.0.0.0/24 and 2001:db8:a::/64 on vpn-host, we issue (on OpenBSD):
out-router # route add -inet 10.0.0.0/24 10.1.0.2
out-router # route add -inet6 2001:db8:a::/64 2001:db8:b::2
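To make these routes survive a reboot on OpenBSD, they can go into the hostname.if(5) file of the inside interface as !-commands; a sketch, assuming that interface happens to be vio1:
# /etc/hostname.vio1
inet 10.1.0.1 255.255.255.0
inet6 2001:db8:b::1 64
!route add -inet 10.0.0.0/24 10.1.0.2
!route add -inet6 2001:db8:a::/64 2001:db8:b::2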
On Linux, we would do:
out-router # ip r a 10.0.0.0/24 via 10.1.0.2
out-router # ip -6 r a 2001:db8:a::/64 via 2001:db8:b::2
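A quick sanity check that out-router now resolves routes toward the VPN network as intended (10.0.0.5 is just a made-up client address); on OpenBSD and Linux respectively, both should point at 10.1.0.2:
out-router # route -n get 10.0.0.5
out-router # ip r get 10.0.0.5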
Some source routing
The crucial step we have to deal with now is making vpn-host route all traffic from 10.0.0.0/24 and 2001:db8:a::/64 via eth1 and out-router.
On Linux, this is relatively simple with iproute2.
We first add a new routing table to our system:
vpn-host # echo "101 cgn" > /etc/iproute2/rt_tables.d/cgn.conf
Next, we add default routes via out-router
to these tables:
vpn-host # ip r a default table 101 via 10.1.0.1
vpn-host # ip -6 r a default table 101 via 2001:db8:b::1
With that in place, we only have to add rules for traffic from the networks on wg0 to use routing table 101 instead of the default one. Again, this is rather straightforward:
vpn-host # ip rule add from 10.0.0.0/24 lookup 101 proto static
vpn-host # ip -6 rule add from 2001:db8:a::/64 lookup 101 proto static
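To double-check that the policy routing does what we want (203.0.113.1 and 10.0.0.5 are just placeholder addresses), we can list the rules and ask the kernel which route it would pick for a packet arriving on wg0; the last command should answer with a route via 10.1.0.1 from table 101:
vpn-host # ip rule show
vpn-host # ip r show table 101
vpn-host # ip r get 203.0.113.1 from 10.0.0.5 iif wg0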
Things more statically
Technically, we can also add this configuration to netplan (to persist it across reboots and all kinds of funny things). The YAML for that would look somewhat like this:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.0.2.1/29
        - 2001:db8::192:0:2:1/64
      gateway4: 192.0.2.7
      gateway6: 2001:db8::192:0:2:7
      match:
        macaddress: <redux>
      nameservers:
        addresses:
          - 1.1.1.1
        search:
          - example.com
      set-name: eth0
    eth1:
      addresses:
        - 10.1.0.2/24
        - 2001:db8:b::2/64
      routes:
        - to: 0.0.0.0/0
          via: 10.1.0.1
          metric: 100
          table: 101
        - to: ::/0
          via: 2001:db8:b::1
          metric: 100
          table: 101
      routing-policy:
        - from: 10.0.0.0/24
          table: 101
        - from: 2001:db8:a::/64
          table: 101
      match:
        macaddress: <redux>
      nameservers:
        addresses:
          - 1.1.1.1
        search:
          - example.com
      set-name: eth1
Note that gateway4/6 is deprecated, and for some reason the default routes for rtable 101 do not get installed with this example. (The from rules work, though…) So, use netplan at your own risk (or send patches for this article. ;-P)
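One workaround that should do the trick (a sketch I have not extensively tested): leave the table-101 bits out of netplan and let a small systemd oneshot unit, e.g. /etc/systemd/system/vpn-policy-routing.service, install the routes and rules at boot:
[Unit]
Description=Policy routing for VPN client traffic
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=ip route add default table 101 via 10.1.0.1
ExecStart=ip -6 route add default table 101 via 2001:db8:b::1
ExecStart=ip rule add from 10.0.0.0/24 lookup 101 proto static
ExecStart=ip -6 rule add from 2001:db8:a::/64 lookup 101 proto static

[Install]
WantedBy=multi-user.target
A systemctl enable --now vpn-policy-routing.service later, the routes and rules should be in place and come back after reboots.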
Intermediate testing
With everything set up as before, we should now be able to connect to the WireGuard VPN on vpn-host. Having received IPv4 and IPv6 addresses there, we should now be able to ping 2001:db8:b::1 and 10.1.0.1 from the VPN-connected device (given that IP forwarding is correctly enabled on all devices ;-)):
vpn-client $ ping 10.1.0.1
vpn-client $ ping6 2001:db8:b::1
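If the pings do not make it, watching the transfer link is usually the quickest way to see where packets disappear, e.g., on vpn-host:
vpn-host # tcpdump -ni eth1 icmp or icmp6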
However, some problems persist: for IPv6, we are still stuck with the easily enumerable per-user addresses from the previous article, and for IPv4, we are currently looking at RFC1918 (private) IPs, which are not routable on the Internet.
Some NAT44 and NAT66 (OpenBSD)
So, let’s do some NAT44 and NAT66, the latter specifically to provide a bit of anonymity for VPN users.
On OpenBSD (where eth0 likely has another name, like em0 or vio0) this is done easily with some PF statements in /etc/pf.conf:
# If you did not get a network routed:
# match out on vio0 from 10.0.0.0/16 to any nat-to 198.51.100.2
# match out on vio0 from 2001:db8:a::/64 to any nat-to 2001:db8::198:51:100:2
# match out on vio0 from 2001:db8:b::/64 to any nat-to 2001:db8::198:51:100:2
match out on vio0 from 10.0.0.0/16 to any nat-to 198.51.100.128/25 random sticky-address
match out on vio0 from 2001:db8:a::/64 to any nat-to 2001:db8:c::/64 random sticky-address
match out on vio0 from 2001:db8:b::/64 to any nat-to 2001:db8:c::/64 random sticky-address
A quick pfctl -f /etc/pf.conf later, we should be able to reach the Internet with IPv4 and IPv6, receiving a somewhat random v6 address. The sticky-address option makes sure that the same inside address keeps getting the same outside address for as long as it has states in the state table.
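Before reloading, pfctl can syntax-check the new rules, and afterwards we can watch the NAT states being created:
out-router # pfctl -nf /etc/pf.conf
out-router # pfctl -s states | grep 198.51.100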
Some NAT44 and NAT66 (Linux)
I am honestly not too good with Linux, so here you only get the quick version, which should (hopefully) work but does not do fancy network-block mappings (see the sketch below for an approximation). Technically, if you got a routed IPv6 network, you can also skip the ip6tables invocation:
out-router # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
out-router # ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
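If you do want something closer to the PF pool behaviour from the OpenBSD section, iptables can approximate it; the following is an untested sketch using SNAT with an address range (plus --persistent for sticky-ish mappings) for IPv4, and the NETMAP target for a 1:1 prefix mapping for IPv6:
out-router # iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j SNAT --to-source 198.51.100.129-198.51.100.254 --persistent
out-router # ip6tables -t nat -A POSTROUTING -s 2001:db8:a::/64 -o eth0 -j NETMAP --to 2001:db8:c::/64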
The full wash
So, now we have our stuff going out of the right(tm) interfaces. Note that eth1 on vpn-host and out-router could also just be another WireGuard tunnel between your vpn-host and out-router, and not a direct connection. Also, the whole routing stuff works just the same if you run WireGuard without the funny wg-ui; a minimal config is sketched below.
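For reference, a bare-bones /etc/wireguard/wg0.conf on vpn-host matching the addressing in this article might look like this sketch (keys and the client addresses are placeholders, and 2001:db8:a::1 as the server’s wg0 address is my assumption); bring it up with wg-quick up wg0:
[Interface]
Address = 10.0.0.1/24, 2001:db8:a::1/64
ListenPort = 51280
PrivateKey = <server-private-key>

[Peer]
# one [Peer] section per client
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32, 2001:db8:a::2/128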
To recap, with all this in place, do the following on vpn-host to get the VPN rolling, after building the newest version of a patched wg-ui:
vpn-host # ./bin/wireguard-ui --wg-endpoint="192.0.2.1" --wg-dns="192.0.2.53, 2001:db8::53" --wg-allowed-ips="0.0.0.0/0, ::/0" --listen-address="127.0.0.1:8080" --wg-keepalive="15" --auth-user-header="X-Request-ID" --client-ipv4-range="10.0.0.0/24" --client-ipv6-range="2001:db8:a::/64" --wg-listen-port=51280 --v6 --random-port-nat &
vpn-host # ip r a default table 101 via 10.1.0.1
vpn-host # ip -6 r a default table 101 via 2001:db8:b::1
vpn-host # ip rule add from 10.0.0.0/24 lookup 101 proto static
vpn-host # ip -6 rule add from 2001:db8:a::/64 lookup 101 proto static
vpn-host # iptables -t nat -I PREROUTING -i eth0 -d 192.0.2.1/32 -p udp -m multiport --dports 1:51819,51821:65535 -j REDIRECT --to-ports 51820
Disclaimer reminder
To conclude this installment of the series, which leaves you with a kind-of-working setup, I’d like to remind you of the initial disclaimer: This is stupid, this is dangerous, this lets other people run traffic through you, and they will most likely route traffic through you that gets you into trouble. There are occasional leaps in this howto, which everyone technically able to make this judgement call should be able to jump along with. If you remain confused, there are most likely better ways for you to help, e.g., by running a Snowflake instance or a Tor bridge.
Next up
Next up, I am going to share some thoughts on how we can easily move some packets around for this host if the direct path is a little bit broken by people with a dislike of making things ping…