2025-W02
After looking at buying a smaller cooler, or perhaps a whole new case, to make room for a faster NIC in my mini-ITX server (its onboard NIC is only 100Mbit/s), I remembered I have a USB-C 1Gbit/s NIC. Problem (sufficiently) solved!
I did some more shuffling around: even with my ThinkPad T440p up for sale, I have too many laptops that overlap in functionality. Since I (gladly) lost my Windows machine to become the downstairs gaming PC, where it sees more use, putting Windows on my T480 at least means I don't have more than one machine doing the same job (Linux lives on the liberated Chromebook with the screen removed).
I plan on not trusting the Windows laptop at all, which makes it perfect for leaving the house: no worrying about 1Password/SSH/Gmail being compromised if the laptop is stolen. It runs just whatever Microsoft puts on these things, plus the DuckDuckGo browser (as a change from my Firefox default). But what if I want to do something privileged while I'm out of the house?
For that, I've set up a privileged VM with a little XFCE environment, reachable via remote desktop over an SSH tunnel or VPN. It doesn't sit logged in, so even if the laptop were compromised an attacker would still need my 1Password unlock password; meanwhile it gives me a remote-desktop-able Firefox instance and somewhere to jump from with my main SSH key.
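The tunnel half of that is just an SSH local forward (ssh -L). Since I've been writing Go anyway, here's roughly what the same forward looks like scripted -- the hostname, user, port, and key path are all placeholders, and the host key check is deliberately lax for the sketch:

```go
package main

import (
	"io"
	"log"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path and addresses: adjust for your own setup.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_ed25519")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "vm.example.net:22", &ssh.ClientConfig{
		User:            "me",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // pin the real host key in practice
	})
	if err != nil {
		log.Fatal(err)
	}

	// Listen locally and forward each connection to the VM's RDP port,
	// i.e. the same thing as `ssh -L 3389:localhost:3389 vm.example.net`.
	ln, err := net.Listen("tcp", "127.0.0.1:3389")
	if err != nil {
		log.Fatal(err)
	}
	for {
		local, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		remote, err := client.Dial("tcp", "127.0.0.1:3389")
		if err != nil {
			log.Print(err)
			local.Close()
			continue
		}
		go func() {
			defer local.Close()
			defer remote.Close()
			go io.Copy(remote, local)
			io.Copy(local, remote)
		}()
	}
}
```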
I'm looking at MPLS over BGP -- I had a reasonably stable thing going with frr and DN42, just ... routing real packets over WireGuard tunnels. One thing I had working was routing IPv4 when you only have IPv6 next-hops, which is neat. You really just need to know the MAC of the next machine you're sending the packet to -- and that MAC can come from the IPv6 ND table just as well as from the regular IPv4 ARP table. Using MPLS kicks this up a notch: we don't even care what MAC addresses or IPs we're transiting, as we're only going to look at labels.
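To make the IPv6-next-hop trick concrete: Linux will happily install an IPv4 route whose gateway is an IPv6 address. Here's a sketch using the vishvananda/netlink Go library -- the interface name and addresses are made up, and I'm assuming the library's Via type behaves as I remember; the iproute2 one-liners are in the comments:

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Hypothetical interface and addresses: stand-ins for whatever your
	// BGP session actually learns.
	link, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	_, dst, err := net.ParseCIDR("192.0.2.0/24")
	if err != nil {
		log.Fatal(err)
	}

	// An IPv4 route with an IPv6 next hop (RFC 5549 style). The kernel
	// resolves fe80::1 via IPv6 ND to find the next-hop MAC, exactly as
	// described above. Equivalent to:
	//   ip route add 192.0.2.0/24 via inet6 fe80::1 dev eth0
	// (The MPLS version swaps labels instead of looking at the IPs at all:
	//   ip -f mpls route add 100 as 200 via inet6 fe80::1 dev eth0)
	route := &netlink.Route{
		LinkIndex: link.Attrs().Index,
		Dst:       dst,
		Via: &netlink.Via{
			AddrFamily: netlink.FAMILY_V6,
			Addr:       net.ParseIP("fe80::1"),
		},
	}
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatal(err)
	}
}
```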
This is complicated enough that it's making me want to set up an actual blog again (or at least a website) to share some of this stuff. These week-notes are really "here's what I did" but it might be nice to have somewhere for "here's how I did it and what it's for".
Speaking of that theoretical blog: I completed a round of benchmarks of storage devices, with and without a hardware RAID controller, and with and without a battery-backed write cache. It's hardly ground-breaking, or even that scientific, but I'd like to write it up somewhere -- if only so I can refer back to it in the future.
Applied for an ARIN organization object for Colocataires. One further step towards the DFZ.
Update: it was issued! Yay. Next up: applying for IP space and an ASN.
I built a little Go utility to post to a bot account on Mastodon/GoToSocial. It'll run from cron, and it was a pretty nice experience to build; my comfort and speed with Go are improving.
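For flavour, the core of it is just one authenticated POST to the Mastodon client API (which GoToSocial also implements). A stripped-down sketch -- the instance URL is a placeholder and the token comes from an app registered with the bot account; my real utility isn't exactly this:

```go
package main

import (
	"log"
	"net/http"
	"net/url"
	"os"
	"strings"
)

func main() {
	// Placeholder instance; MASTODON_TOKEN holds the bot's access token.
	instance := "https://social.example.net"
	token := os.Getenv("MASTODON_TOKEN")

	// POST /api/v1/statuses with a single form field: the status text.
	form := url.Values{"status": {"beep boop, posted from cron"}}
	req, err := http.NewRequest("POST", instance+"/api/v1/statuses",
		strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("post failed: %s", resp.Status)
	}
}
```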
Anyway, I thought: why not gold-plate it and add some error logging to a service? In a few places I just panic when seeing a rare error, and I'd like to be notified when that happens -- it seems like Sentry or similar could do exactly that. Coincidentally, I was looking at uptrace/bun as a Go ORM thing and thought "what's uptrace?" (not much, what's up with you, etc.) -- it's a Sentry-alike that looks a bit more minimal but targets all the modern stuff that didn't quite exist when Sentry was started: all the Open* libraries.
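The panic-to-notification part is pleasantly small with the official Go SDK (github.com/getsentry/sentry-go); the DSN below is a placeholder for whatever the backend hands out:

```go
package main

import (
	"log"
	"time"

	"github.com/getsentry/sentry-go"
)

func main() {
	// Placeholder DSN from the Sentry-ish backend.
	err := sentry.Init(sentry.ClientOptions{
		Dsn: "https://examplekey@sentry.example.net/1",
	})
	if err != nil {
		log.Fatal(err)
	}
	// Defers run last-in-first-out: Recover captures the panic,
	// then Flush ships buffered events before the cron job exits.
	defer sentry.Flush(2 * time.Second)
	defer sentry.Recover()

	doTheActualWork()
}

func doTheActualWork() {
	panic("the rare error finally happened")
}
```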
I thought it would be cool to implement tracing and error reporting with email logging for my own stuff, perhaps as a precursor to things we build at Colocataires.
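On the tracing side, the OpenTelemetry Go SDK makes the instrumentation half fairly painless regardless of which backend ends up receiving it. A minimal sketch -- no backend is hard-coded, since the OTLP/HTTP exporter reads OTEL_EXPORTER_OTLP_ENDPOINT from the environment:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Ships spans over OTLP/HTTP to whatever OTEL_EXPORTER_OTLP_ENDPOINT points at.
	exporter, err := otlptracehttp.New(ctx)
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(ctx)
	otel.SetTracerProvider(tp)

	// One span around the interesting work; nested spans hang off ctx.
	tracer := otel.Tracer("bot")
	ctx, span := tracer.Start(ctx, "post-status")
	_ = ctx // pass this into whatever does the actual work
	span.End()
}
```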
Now, trace and error collection are actually hard problems, and most of the tools I've found, even the open source ones, were primarily designed to run as hosted services. Because of that, and because of scale, they've grown pretty large dependency sets that something designed to run in a homelab wouldn't necessarily need. Sentry now depends on PostgreSQL, Redis, Kafka (and its dependencies), plus some OTel components; Uptrace also depends on OTel, plus ClickHouse, and it bundles a test SMTP server because you never know what a user will have at their end. There's also Grafana, though maybe just for demo purposes, since they implement their own Loki- and Prometheus-compatible backends?
docker compose papers over this stuff, and maybe I should just be happy with that and look away, but there's definitely a part of me that finds it worrying to be running a whole PostgreSQL somewhere that I will forget about.