OpenBSD version: 7.1
Arch:            Any
NSFP:            Maybe?

So, after i shared some opinions on the state of the world, it might be nice to talk a bit more about AS59645 again. In fact, it has been well over two months since the last blog post about things happening to my AS. Well… a lot has happened since then… basically, i decided to move my tunnel-based AS a bit more in the direction of, well, hardware.

So, to get back on track, let’s summarize what happened.

Picking some hardware

Due to rather lucky circumstances, a set of hard drives made their way to me earlier this year. Specifically, i found myself to be the proud (new) owner of some 32 (rather) used 3TB SAS drives. Given that this whole operation is somewhat NSFP in nature, these are kind of the ideal foundation to set up a small (self-)hosting environment.

The more interesting question then became what servers i would want to put these disks into. There were three essential requirements:

  • The hardware i buy should be reasonably cheap
  • The hardware i buy should be reasonably energy efficient
  • The hardware i buy should have enterprise management features

Combining all of this, i ultimately settled on a couple of systems with Intel v2 Xeons and a reasonable amount of memory.

The odd thing here is that these processors (and systems) are, effectively, around a decade old. Nevertheless, it was a reasonable choice to go for these systems…

Being a decade old meant that the hardware i was going for would be reasonably cheap (yay). At the same time, not a lot has happened in the (server) CPU market up until AMD recently rolled out EPYC, which finally brought quick progress in the number of cores and the processing capacity per core again. These new systems are naturally not in the ‘cheap’ area (yet ^^). If we then look at the somewhat older systems that are somewhat affordable (v3/v4 Xeons, for example), the performance delta to v2 Xeons is relatively limited, especially given that such used systems still go for 2x the price of the slightly older variety.

Finding a place to be

When it comes to colocation, there is a myriad of different options, roughly ranging from renting a single rack unit to renting a full cage, and anything in between.

The general wisdom of colocation, though, is kind of the same regardless of size: Power, space, network, pick two. What this essentially means is that, for a reasonable price, you will only find two of the three in one place:

  • Sufficient(ly) cheap (and reliable) power (and AC), in terms of cost and availability
  • Sufficient(ly) cheap space, in terms of price per rack unit
  • Sufficient(ly) cheap (in terms of traffic costs) and good (in terms of available bandwidth, available carriers, and latency) network

Getting all of the above in reasonable abundance usually means that the ‘cheap’ part has to be dropped.

Depending on the location (or your needs) these three parameters might differ in importance. If you just got a stack of 4U servers that eat very little power, and want them well connected to the Internet, your choice will most likely be ‘space+network’; if you want to run some energy-hungry 1U servers (even though you maybe shouldn’t) that draw a tad more power and mostly do compute, i.e., do not need a lot of bandwidth, you might get some cheap options in places with little space but low energy costs (think: Iceland).

In the end, i managed to find a really nice place for a reasonable price that does kind of well on all three parameters. This gives me 10U, 2x 7A of power (at 230V), and 2x 1 Gbit/s of network: one link with a BGP upstream from the DC, and one with a VLAN to DECIX-DUS, where i can draw some additional upstream from some friendly ASes, and also do some fun peering. Hence, AS59645 is also a bit better upstreamed than before, with the added benefit of my upstreams being closer than 10ms now. You can now also take a look at how things should be forwarded in a looking glass.
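To give a rough idea of what such a setup looks like on the router side, below is a minimal bgpd.conf sketch for OpenBSD’s bgpd(8) with one transit session from the DC and a couple of sessions across the IXP VLAN. All addresses, prefixes, and remote AS numbers in it are made-up placeholders (only AS 59645 is real), and the filters on the actual routers are, of course, quite a bit more involved:

    # /etc/bgpd.conf -- minimal sketch, placeholder addresses/ASNs throughout
    AS 59645
    router-id 192.0.2.1                  # placeholder router ID

    prefix_v4="192.0.2.0/24"             # placeholder prefix to originate
    network $prefix_v4

    # Transit session provided by the datacenter
    neighbor 203.0.113.1 {
        remote-as 64496                  # placeholder upstream ASN
        descr     "DC upstream"
    }

    # Sessions across the IXP VLAN (additional upstream + peering)
    group "decix-dus" {
        neighbor 203.0.113.2 {
            remote-as 64497              # placeholder friendly upstream
            descr     "upstream via IXP VLAN"
        }
        neighbor 203.0.113.3 {
            remote-as 64498              # placeholder peer
            descr     "peering"
        }
    }

    # bgpd denies all EBGP routes by default (RFC 8212), so explicitly
    # accept what the neighbors send and only announce my own prefix
    allow from ebgp
    allow to ebgp prefix $prefix_v4

A real configuration would also carry IPv6 sessions, max-prefix limits, and proper import filters, but the above captures the general shape of ‘one transit link, one IXP link’.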

So, what gives?

Having put my hardware into the rack, i now have a rather solid (self-)hosting setup that can somewhat reasonably push some packets.

I also have some content for further articles: The virtualization systems do not run OpenBSD (mostly because VMM does not support more than one vcpu per guest, which sadly does not mix overly well with (relatively) high memory and CPU density and with running some somewhat more resource-hungry things in VMs). I had some interesting experiences with rather strict interpretations of BCP38, IXP LANs, and PMTUD with a tunnel-y AS. I am now running a RIPE Atlas Anchor (and hopefully soon an NLNOG RING node). And, i started to provide upstream for another AS myself.

Other things to do

Apart from that, i am currently wondering which other things i could run and support, be it ‘for the good of the Internet’ or for self-hosting people in general. So, if i missed a great idea for what i could do, drop me a line at contact@as59645.net.