OpenBSD version: 7.1 (and some Ubuntu 22.04)
Arch:            Any
NSFP:            This might be one of the most stupid things i ever did.

So, i am generally a rather big proponent of making it ping. Sadly, our world is rather riddled with people who have rather different opinions about whether things should ping or not. In such a world, the usual go-to thingy to get connected is running Tor, for which we all hopefully run some infrastructure, either in the form of bridges, exit nodes, or snowflake instances. However, sometimes you are just looking for a tad more ping than you get through Tor, and end up longing for a VPN, say, to upload videos to make sure your voice is heard. Commercial VPN services are usually a go-to thing here. However, those people with an affinity for making it not-ping tend to be rather quick in making those services not-ping first. What might come in handy now is a VPN service that can just pop up somewhere on, say, any small host which happens to have a public IP address…

The obvious issue there is, of course, user provisioning…

Disclaimer

This. Is. Stupid. Like, seriously stupid.
This blog is called ‘doing stupid things’, but this post blows the bounds of stupid.
This is basically a manual for letting $everyone send packets via your hosts.
This is roughly on par with running a Tor exit node, with the added risk of… well, not actually being a Tor exit node. If you are not currently thinking about some of that nice PA space your ISP-y organization has lying around, which might ‘accidentally’ end up being used for this service due to some ‘accidentally’ running test & development system which ‘somebody’ carelessly exposed to the Internet… move along.

The Idea

The idea is relatively simple: Have a website to which people can go, where they receive a wireguard configuration file. With that config file, they can go about connecting to said VPN and enjoy the (mostly) unhindered free-flow of packets to the wide & wild Internet. This comes with a couple of challenges:

  • We need $some webinterface which enables what we want to do
  • We ideally don’t want said webinterface to subject people just clicking around to some form of tracking
  • We want a rather sign-up-free provisioning of users, but also don’t want to give every user access to every other user’s config files (and thereby: keys).

To get these three wishes fulfilled at once, we are kind of poised to fiddle around with software that is already there…

Interface basics

To get things going, i picked the wg-ui published by Embark Studios. This tool is written in Go, and comes well integrated with WireGuard (on Linux only, sadly, due to the Go packages used). Furthermore, it comes with an MIT license attached, which is generally useful for making changes. For further input on how to compile wg-ui from source, please take a look at the official repo.

Getting rid of Google fonts

Google Fonts is one of the nastiest things in our modern world. Besides being a privacy nightmare (the scientist in me wants to provide a citation here,… then again… ), they–for some reason–show up in close to every self-hosted OSS tool you can imagine. (Note, btw, that CDN-hosted fonts have no advantages anymore, as load times of cached fonts are an information-leaking side-channel… so… caching is now done per domain… rendering the idea behind Google Fonts (in terms of load times)… obsolete.) For this project, i briefly pondered downloading the fonts and serving them locally, for example with a fancy cloud tool taking over most of the work… Then again… this is not a project for major design work… <!-- it is. -->

Self-hosting VPN Binaries

To be able to use the VPN, users will have to install the VPN software–WireGuard in this case–on their devices. While this should always be done via the official channels, the use case of our software might find people in a place where they can reach the VPN system but (not yet) the WireGuard site. For this purpose, we will also be hosting an Android APK and the Windows x86_64 .msi on our box (the corresponding changes are part of the same commit). On our system, these go into /var/www/html/assets/ to be served by nginx.
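
Getting the binaries onto the box can then be as simple as something like this (file names and version strings below are placeholders; check the official download index and F-Droid for the current ones, and verify what you fetch):

mkdir -p /var/www/html/assets
# Windows installer; the current file name is listed at https://download.wireguard.com/windows-client/
curl -o /var/www/html/assets/wireguard-amd64.msi \
    "https://download.wireguard.com/windows-client/wireguard-amd64-X.Y.Z.msi"
# Android APK; the current F-Droid build is listed at https://f-droid.org/packages/com.wireguard.android/
curl -o /var/www/html/assets/com.wireguard.android.apk \
    "https://f-droid.org/repo/com.wireguard.android_XXXXX.apk"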

User (non) signup

wg-ui usually works in conjunction with some external authentication framework, commonly the basic auth features of a reverse proxy running in front of it. The default here is X-Forwarded-User, but that can be changed by supplying --auth-user-header. Hence, we could just add the following lines (i assume you know how to get LE certs, and how to configure nginx in general :-)) to our nginx config, given we made wg-ui listen on localhost with --listen-address="127.0.0.1:8080":

...
        server {
                listen 443 ssl;
                listen [::]:443 ssl;

                ssl_certificate /etc/letsencrypt/live/host.example.com/fullchain.pem;
                ssl_certificate_key /etc/letsencrypt/live/host.example.com/privkey.pem;
                include /etc/letsencrypt/options-ssl-nginx.conf;
                ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
                server_name  host.example.com;
                root /var/www/html;

                location / {
                        auth_basic "auth";
                        auth_basic_user_file /etc/nginx/htpasswd;
                        proxy_pass http://127.0.0.1:8080;
                }
        }
...

We can subsequently populate /etc/nginx/htpasswd with username and password combinations, which we can then share with people.
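
In case you have never built one of those by hand: on Ubuntu, the htpasswd tool from the apache2-utils package does the trick (user names below are, of course, just examples):

# install the helper (Ubuntu; package names differ on other systems)
apt install apache2-utils
# -c creates the file and adds the first user; it will prompt for a password
htpasswd -c /etc/nginx/htpasswd someuser
# further users are added without -c
htpasswd /etc/nginx/htpasswd anotheruser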

The problem, though, is that everyone using the same credentials will be able to download everyone else’s config files… including private keys, which er… is not the plan.

So, we need some random component in there. For that, nginx conveniently assigns each individual request a unique ID ($request_id). Next, we can map that onto a header forwarded by our reverse proxy using:

...
        # reuse a client-supplied X-Request-ID if one came in;
        # otherwise fall back to nginx's per-request $request_id
        map $http_x_request_id $uuid {
                default   "${request_id}";
                ~* "${http_x_request_id}";
        }
        server {
...
                proxy_set_header X-Request-ID $uuid;
                add_header X-Request-ID $uuid;

                location / {
...

If we now start wg-ui with --auth-user-header="X-Request-ID" added, we end up with a new user for every request, regardless of which basic auth user has been used.

However, even though this is rather useful, it stops being useful rather quickly when we try to download a config file. As that is a new request, we now get a new user, which in turn is not permitted to access the API endpoint for downloading the config file:

WARN[7237] Unauthorized access                           path=/api/v1/users/0ba69c5bdfbf82371d87b291ce14f1e4/clients/1 user=a1eb36d3cfa042e878e044320a4b636b

So, we need a way to pin the user of a client to the user it had upon its first request.

This can be quickly implemented by adjusting the cookie functionality in wg-ui. Instead of always setting a new cookie, we now check whether a cookie already exists–and, if so, update the current user to that value, instead of updating the cookie to the current user.
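
The actual patch lives in wg-ui's own cookie handling, but the idea boils down to something like the following generic net/http middleware (a sketch only: the cookie name, the helper, and the wiring are made up for illustration and are not wg-ui's real code):

package sketch

import "net/http"

// userCookie is an illustrative cookie name; wg-ui uses its own.
const userCookie = "wguser"

// pinUser takes the user identity from an existing cookie if one is present;
// only on the very first request (no cookie yet) does it trust the header set
// by the reverse proxy and persist that value in a cookie.
func pinUser(authHeader string, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if c, err := r.Cookie(userCookie); err == nil && c.Value != "" {
            // returning client: override the per-request ID with the pinned user
            r.Header.Set(authHeader, c.Value)
        } else {
            // first request: remember the random user nginx just handed us
            http.SetCookie(w, &http.Cookie{
                Name:  userCookie,
                Value: r.Header.Get(authHeader),
                Path:  "/",
            })
        }
        next.ServeHTTP(w, r)
    })
}

In wg-ui itself this amounts to a small change to the code that currently always sets a fresh cookie, rather than an additional middleware.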

Running the service

With everything in place, we can now compile & launch our Go service behind nginx:

./bin/wireguard-ui --wg-endpoint="192.0.2.1:51280" --wg-dns="192.0.2.53" --wg-allowed-ips=0.0.0.0/0 --listen-address="127.0.0.1:8080" --wg-keepalive="15" --auth-user-header="X-Request-ID" --client-ip-range="198.51.100.0/24" --wg-listen-port=51280

With 198.51.100.0/24 routed to our box, we will have a bunch of publicly reachable clients. If, for whatever reason, you do not have a whole /24 in IPv4 addresses sitting around, you can also provide an RFC1918 --client-ip-range, and add --nat-device=$dev --nat to your command line.
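
For the NAT variant, the invocation might then look roughly like this (the RFC1918 prefix and the egress interface name are placeholders; pick whatever fits your box, and make sure IP forwarding, i.e. net.ipv4.ip_forward=1, is enabled):

./bin/wireguard-ui --wg-endpoint="192.0.2.1:51280" --wg-dns="192.0.2.53" --wg-allowed-ips=0.0.0.0/0 --listen-address="127.0.0.1:8080" --wg-keepalive="15" --auth-user-header="X-Request-ID" --client-ip-range="10.66.0.0/24" --wg-listen-port=51280 --nat --nat-device=eth0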

Apples and iOranges

Something rather odd you may hear from people using your service is that they–for some reason–cannot connect, and WireGuard tells them that their config file is broken. Inspecting those configuration files, you will see something along the lines of:

<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

The issue here is that iOS browsers (or at least Brave on iOS, maybe more) have a tendency to ignore the basic auth credentials a user entered for a site when downloading a file. In your nginx logs (which, btw, you should probably turn off for a service like this ;-)), you will see something along the lines of:

$IP - - [24/Sep/2022:11:24:39 +0000] "GET /api/v1/users/[...]/clients/1?format=config HTTP/1.1" 401 188 ...

We can rather quickly fix this by removing authentication via basic auth for that API endpoint. The user will still have to provide the right cookie (which, for some reason, iOS devices do) to get the config file. To avoid exposing the create API path, we can place a regex location block into nginx's config (and, for good measure, also exclude the static assets).

...
                add_header X-Request-ID $uuid;

                location /assets {
                        root  /var/www/html/;
                }
                location ~* /api/v1/users/[a-z0-9]*/clients/[0-9]*$ {
                        proxy_pass http://127.0.0.1:8080;
                }
                location / {
...

A quick reload later, users with an affinity for fruit can also download configuration files.

Some security reflections

This whole setup is, of course, utterly unsafe: everyone who has the basic auth credentials set in nginx (you may also leave those out altogether… but well… ) will be able to route traffic via your system by getting themselves a VPN config. For good measure (and to stay out of–at least–email blocklists) it might be worthwhile to keep clients from originating mail to MXes directly, e.g., by blocking port tcp/25 outbound. Still, this is pretty much veryveryvery stupid.
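
A minimal sketch of that block, assuming iptables on the Linux box from above and wg0 as the WireGuard interface name (both of which you may need to adjust):

# reject SMTP leaving via the VPN clients before it gets forwarded out
iptables -A FORWARD -i wg0 -p tcp --dport 25 -j REJECT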

Then again, security is also a… difficult… thing. Essentially, everyone who can guess or determine nginx's request IDs (or sufficiently limit their keyspace) can read all VPN configs on the system (thanks Apple…). This again would enable them to MitM traffic over the VPN, or delete known VPN configurations. To be fair to the fruity supplier of handheld devices, having the endpoint behind basic auth–given that the password has to be semi-public in your user base–is also not that much better. The cookie solution is also… not that great, and certainly lacks some security features(tm) (matching to the domain, TLS only… ).
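
Just for reference, the knobs the current cookie is missing would look roughly like this in Go's net/http (a sketch of what could be set, not what the patched wg-ui does; name, value, and domain are placeholders):

package sketch

import "net/http"

// hardenedCookie shows the attributes the current cookie solution lacks;
// the values here are purely illustrative.
func hardenedCookie(name, value string) *http.Cookie {
    return &http.Cookie{
        Name:     name,
        Value:    value,
        Path:     "/",
        Domain:   "host.example.com",      // only ever send it to our domain
        Secure:   true,                    // TLS only
        HttpOnly: true,                    // not readable from page JavaScript
        SameSite: http.SameSiteStrictMode, // never send it cross-site
    }
}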

Then again, though, VPN configs are pretty much there for ‘grab and go’… and people into non-pinging are probably more than happy to just drop packets to the WireGuard endpoint altogether… Still, this is something that prioritizes moving packets over security… and users should be aware of the caveats (and consider using Tor!). But sometimes… ping is better than non-ping… for example, to download a Tor browser…

Up next

This entry surprisingly became a bit longer than anticipated, and there are still a couple of aspects left around this service. Specifically:

  • Making the wireguard service listen on any port and giving out random ports to users (more spread(tm))
  • Adding IPv6 support to the service (the future is now(tm)).
  • Doing some source routing shenanigans so we are not NAT-ing out of the same interface we are receiving packets on.
  • Discussing funny ways to make the tool accessible via more places on the Internet in case our initial endpoint is non-ping’ed.

Again… looking at the length of this… this will have to wait for a couple more articles… let’s see if i get to that in the coming days.