Tag - selfhosted


25 Apr 2025

Gaming Update

I wanted to share a quick update on my gaming setup:

I used to have a desktop PC running Fedora Workstation which acted as my "Gaming PC," primarily for Cities: Skylines II and Palworld.
Within the past month, I've repurposed that computer into a combination NAS/media server/gaming machine. I'll cover the rest of the change in another post, but here I wanted to focus on the gaming aspect real quick.

I installed Debian as the OS, and then installed Steam onto the main SSD. I mounted the same NVMe drive which I had used to store my game library previously, and was happy to see that it all worked once I pointed Steam to its location. I also installed Protontricks, so that I could continue to use Skyve to keep CSII up-to-date (see my previous post on Skyve here). I also created an alias so that I could launch Skyve easily by adding

alias skyve="protontricks --no-bwrap -c 'wine /mnt/nvme/SteamLibrary/steamapps/compatdata/949230/pfx/drive_c/Program\ Files\ \(x86\)/Skyve\ CS-II/Skyve.exe' 949230"

to my .bashrc.
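
To pick up the new alias without opening a new terminal, you can reload the file and launch Skyve right away:

source ~/.bashrc
skyve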

Next, since this PC is intended to act as a server, I set up Sunshine so that I could stream my games to any device. Now I can launch Moonlight from my laptop (or even my cellphone!) and play anything in my Steam library with the full power of my server and GPU!

Skyve running on Debian, as seen from my laptop via Moonlight:

Screenshot of Skyve

The server is also running Jellyfin, several Samba shares, and a handful of other goodies... But more on that all later!

21 Feb 2025

Cloudflare for the Selfhoster

If you selfhost any applications and have wondered how you can best access those applications from outside your network, you’ve undoubtedly come across Cloudflare. Cloudflare offers two services in particular that might be attractive to homelabbers and selfhosters: reverse proxying and “tunnels”.

Both of these offer some benefit: proxying potentially provides a degree of Denial of Service (DoS) 1 protection and use of their Web Application Firewall (WAF), and the tunnel adds the benefit of circumventing the host’s firewall and port-forwarding rules. To sweeten the deal, both of these services are offered “for free”, with the proxying actually being the default option on their free plans.

Let’s examine these benefits briefly and see why they’re as popular as they are.

Advantages


DoS Protection

By virtue of being such a massive network, Cloudflare’s proxies are able to absorb a massive number of requests and block those requests from reaching their customer. You can think of it as a dam: when a massive flood of requests comes in, Cloudflare is in front of the customer holding back the waters.

WAF

Cloudflare’s WAF allows customers to set firewall rules on the proxy itself, which can block or limit outside requests from ever reaching the customer, similar to the DoS protection.
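
As a hypothetical illustration (I’m writing this expression from memory, so it may not match Cloudflare’s current rule syntax exactly), a WAF rule that blocks requests to an admin path from outside the US might look like:

(http.request.uri.path contains "/admin" and ip.geoip.country ne "US")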

Bypassing Firewalls (tunnel)

Many firewalls are configured to allow outgoing connections and block incoming connections. This is great for security, since it prevents others from accessing your stuff from the public internet, but it can be a problem if you want people to access your website or services. Cloudflare’s tunnel service takes advantage of the “allow outgoing” rule to establish a connection from the host server to Cloudflare, then allows incoming requests to tunnel through this connection to reach the host server.
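
As a rough sketch of how this looks in practice with Cloudflare’s cloudflared daemon (the tunnel name and hostname here are placeholders):

# Authenticate, create a named tunnel, point a hostname at it, and run it
cloudflared tunnel login
cloudflared tunnel create home
cloudflared tunnel route dns home app.example.com
cloudflared tunnel run home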

Bypassing Port-Forwarding rules (tunnel)

If you’re hosting a service from behind an IPv4 address, there’s a good chance that you’re using a technology called Network Address Translation (NAT) 2. This allows you to have multiple systems running behind a single public IP address. This concept can be taken further by Carrier Grade (CG) NAT, where the ISP applies NAT rules to their customers. The problem with NAT is that it becomes difficult to make an incoming request to a machine behind NAT. To an outside system, all of the machines behind the NAT layer appear to share a single address. To partially overcome this, a rule needs to be configured that says “when a request comes in for port X, forward it to private address Y, port Z.” This allows, in a limited way, an outside connection to make use of the NAT address and a particular port to send traffic to a specific machine behind the NAT layer. In the case of CGNAT, however, these rules would need to be created by the ISP, a service which most do not offer. Just as it takes advantage of outgoing firewall rules, a tunnel also uses outgoing connections to avoid the need for specific port-forwarding rules, directly tunneling a list of ports straight to the host machine.
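
On a typical Linux router, that “port X to address Y, port Z” rule might look something like this (the interface, addresses, and ports are examples only):

# Forward incoming TCP traffic on public port 8443 to an internal server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8443 -j DNAT --to-destination 192.168.1.50:443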

On the surface, this all sounds like a great product; it allows smaller customers to selfhost even if they’re in a sub-optimal environment.



Considerations


Cloudflare’s proxy and tunnel models both have several aspects, however, that might make some reconsider their use.

DoS Protection

DoS attacks can vary in size and scope, and their impact will depend not only on the volume of the attack, but also on the capabilities of the targeted system. For example, for a small home server, a few hundred connections per second might be enough to cause a denial of service. At the opposite extreme, Cloudflare’s servers routinely handle an average of 100,000,000 requests per second without issue. With those numbers in mind, a DoS on a small home server might be so small that it goes unnoticed by Cloudflare’s protections. I do have to say “might” here, though, because I could not find a definitive answer to how Cloudflare determines what a DoS attack looks like. I would expect, however, that the scale of a home server would be so small compared to the majority of Cloudflare’s customer base that an effective DoS attack would not look significantly different from “normal” traffic.
Lastly, and quite importantly, this protection only exists for attacks targeted at a domain name. Domains exist to make the internet easier for humans; for a computer script or bot, an IP address is far simpler. If an attacker targets the host’s IP address directly, that attack will completely bypass whatever protection Cloudflare was providing. If you’re behind CGNAT, this means the attack will target the ISP, but if you have a publicly addressable IP address, that attack is directly targeting your home router.

Security

Cloudflare offers these services for free, and as the saying goes: “If you are not paying for it, you’re not the customer; you’re the product.” Cloudflare advertises that their proxy and tunnel services can optionally provide “end-to-end” (e2e) encryption. In the traditional and commonly used definition, this means that traffic is encrypted by the client (you or your users) and decrypted by the host (your server). Under that definition, no intermediary device can decrypt and read the traffic.

Cloudflare, however, uses the term a little differently, as you can see in this graphic.

cloudflare’s “end-to-end” encryption

Instead of providing traditional e2e, Cloudflare acts as a “Man in the Middle” (MITM), receiving the traffic, decrypting it, analyzing it, and then re-encrypting it before sending it along. Cloudflare does this in order to provide their services; they collect and store the unencrypted data in order to apply WAF rules, analyze patterns, and so on.
Now, Cloudflare is a giant company with billions in contracts; contracts they could potentially lose if they were found to be misusing customer data. They wouldn’t benefit by leaking your nude photos from your NextCloud instance or exposing your password for Home Assistant, but you should understand that by giving them the keys to your data, you are placing your trust in them. This MITM position also means that, theoretically, Cloudflare could alter your content without you (or your users) knowing about it. Normally this would cause a modern browser to display a very large “SOMEONE MIGHT BE TRYING TO STEAL YOUR DATA” warning, but because you are specifically allowing Cloudflare to intercept your data, the browser has no way of confirming whether the content actually came from your server or from Cloudflare themselves.
Cloudflare does have a privacy policy, which explains exactly how Cloudflare intends to use your data, and a transparency report, which is intended to show exactly how many times each year Cloudflare has provided customer data to government entities.

The warning you would normally get during a MITM attack:
SSL Warning

Content Limitations

Lastly, Cloudflare, by default, only proxies or tunnels certain ports. If you want to forward an unusual port through Cloudflare, like 70 or 125, you would need to use a paid account or install additional software. Cloudflare's Terms of Service also limit the type of data you can serve through their free proxies and tunnels, such as prohibiting the streaming of video content on their free plans.

Alternatives


Are there other ways to get similar benefits?

  • If only a limited number of users need access to your server or network and you have a public IP address, the best solution by far is to use a VPN, such as WireGuard (a minimal config sketch follows this list). WireGuard allows you to securely access your network without exposing any attack surface for bots or malicious actors.
  • If you don’t have a public IP, there’s a service called “Tailscale” which uses the same outgoing-connection trick to bypass firewalls and CGNAT, but instead of acting as a MITM privy to all of your traffic, Tailscale simply coordinates the initial connection and then allows the client and host to establish their own secure, encrypted connection.
  • If you do want to expose a service to the world (like a public website) and you have a public IP address, you can simply forward the needed port(s) from your router to the server (typically 443 and possibly 80).
  • If you want your service to be exposed to the world, but are stuck behind CGNAT, then a Virtual Private Server (VPS) is a potential option. These typically have a small cost, but they provide you with a remote server that can act as a public gateway to your main server. They can also provide a degree of DoS protection, since they’re on a much larger network and you can simply turn off the connection between the VPS and your server until things calm down.
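
As promised in the first bullet, here’s a minimal sketch of a WireGuard server config (the keys, addresses, and file path are placeholders, not working values):

# /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <server-private-key>    # generate with: wg genkey
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# One client allowed to connect over the tunnel
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

Once both ends are configured, wg-quick up wg0 brings the interface up and the client can reach your network as if it were local.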

Closing thoughts

Cloudflare offers some great benefits, but if you're particularly security-minded, you may want to look into alternatives. Even though Cloudflare is trusted by numerous customers around the world, you still have to decide if you want to trust them with your data. Alternatives do exist, but Cloudflare's offerings are mostly free and comparatively easy to use.




1. DoS or a "Denial-of-Service (DoS) Attack" is an attack in which an attacker floods a server with internet traffic to prevent legitimate users from accessing the services. Cloudflare offers an example of one type of DoS attack here.

2. NAT translates private IP addresses in an internal network to a public IP address before packets are sent to an external network.

19 Feb 2025

Inspect What You Expect

That's a phrase I heard for the first time when I became a manager; my boss would tell me I needed to "inspect what I expect." To be honest, I don't think I fully understood the phrase until much later. In the context of leadership, it means that if you have certain expectations for your team, you should actively verify that those expectations are being met and investigate the cause if they're not. Don't wait until eval season to say "You failed to meet expectations"; instead, review expectations regularly, and if someone isn't meeting them, it's your job to figure out why. Have you defined the expectations in a way that they understand? Are your expectations realistic? Did you provide the resources they needed to meet the expectations? Did you provide guidance when needed? And so on.

You might be asking: "Sure, but why is this on a blog about computers and stuff?"

Expectations


You might have the expectation that your network is secure. You don't remember adding any firewall rules or forwarding any ports, or maybe you followed some YouTube tutorial and they told you to proxy your incoming connections through Cloudflare. Maybe you set up a "deny all" line in that NGINX config or you only access your network through a VPN. You expect that your network is safe from outside actors.

To use a real world example, my father set up several Lorex brand CCTVs on his home network to save video to an NVR. He configured a VPN to allow secure remote access to the NVR and assumed that everything was safe. He didn't create any firewall rules or port forwarding to allow access by anything other than the VPN. He expected that this was safe, and in theory, it should have been.

Inspection


We traveled together for a holiday and while drinking coffee one morning, I showed him Censys. Censys is a search engine, but unlike Google or Bing, which search for webpages, Censys searches for the devices themselves. It indexes the IP addresses of everything on the internet, along with details about the services each one exposes. You can write queries to search this index for almost anything you can think of.

For a quick demonstration, I just searched for his home IP address, and we were both surprised when we saw the results:
not actually his results

Umm... What?

For those who might not know, RTSP stands for Real-Time Streaming Protocol. It is a protocol designed to stream video over a network, commonly used for CCTV cameras. The HTTP port was serving the HTML login screen for one of his cameras. We clicked it, I sarcastically tried a default login (admin, admin), and we found ourselves staring at his basement.
As far as he knew, he had taken all the appropriate steps to secure the devices on his network and expected that nothing was unsafely exposed, but he hadn't inspected that expectation. We found that the camera was factory-configured to expose itself via UPnP, a technology which allows devices to request changes to port-forwarding and firewall rules without user involvement. This is supposed to allow for easy set-up by inexperienced users, but it can also significantly compromise security without the user knowing about it. In our case, my father is not an inexperienced user; he's been a computer engineer since the 80s and has even worked for one of the major producers of networking equipment. He took all the right steps to get his expected result, he just hadn't inspected it to ensure that his expectation was being met.

Inspect What You Expect


Censys can be a great starting point when evaluating your network by helping you to understand what you have exposed to the internet. Censys queries can be as simple as an IP address if you just want to look at a single host, or they can search the entire internet for very specific characteristics.
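
As a hypothetical example of the query language (I'm writing the field names from memory, so they may differ slightly in current versions of Censys), a search for exposed FTP servers in Japan might look something like:

services.service_name: FTP and location.country: Japan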

Here are the results of a very simple query looking for exposed FTP servers based out of Japan:
japan

There are a lot of FTP servers, obviously, but did you notice that SSH server on port 10022? Some people expect that services will be hidden if they run them on non-standard ports, but they don't inspect to see whether that's actually the case. Here, we can see that the SSH server is still quite visible despite being on a non-standard port, just like those non-standard HTTP servers in the other entries. Clicking into an entry will provide even more information, like software versions, request responses, and so on.

Through Censys, I realized that I was running an older version of Nginx than I thought I was, and that this older version had a number of vulnerabilities that were patched in later versions. I expected that I was running a current version, but my inspection showed me otherwise.

Final thoughts


While a tool like Censys isn't the only tool you should use to inspect your security expectations, it's a great starting point, since it can show you what your network looks like to the internet. It's also a fun tool to use to explore the internet from a different angle. Instead of just searching the surface of the web for YouTube videos and news stories, try searching deeper for Roombas, smart lights, or security cameras.

The important takeaway, though, is that just because you think something is working the way you expect doesn't always mean that it is.
The only way to know for sure is to inspect what you expect.

17 Feb 2025

Ramblings on my digital life

Are you in control of your digital life, or is it controlling you?
Who decides what you see online?
What information is being collected about you?

Like many, I'm going through a phase of re-evaluating my interaction with social media and, to some extent, computer technology as a whole. I'm not entirely "degoogling," like some are trying to do; rather, I'm trying to tailor my digital life to me instead of the other way around. I don't want to be fed a "curated" stream of content that some algorithm has decided I'm most likely to "engage" with; I want content that I want to see, in a format that I've chosen to see it in.

I want control.



Blogs

One step towards achieving this goal is what you're reading now. This blog. I decided on the software. I built the server that it's running on. I picked the ugly colors (unless you're reading this via gopher). I can delete posts, create new posts, or destroy the blog entirely. There are no trackers, ads, AI, or other garbage unless I decide that there should be. This doesn't get the readership that I would get if I were writing this on Facebook, but the few people who do read it are reading it because they want to, and that's what's important.

I've started following other blogs, too, some of which are listed in the "Links" section in the menu to the left. Reading personal blogs is great, because you're just reading what a person wrote. There's no algorithm; it's just delivering whatever they typed straight into your eyeballs.

Websites

I maintain my personal homepage and I've been visiting other personal websites. While I personally self-host my homepage, sites like Neocities are making it easy for individuals to create personal webpages without needing to manage a webserver themselves. Searching for new personal homepages can be a little difficult compared to massive SEO'd sites, but services like "webrings" can help you find other similar homepages and provide plenty of rabbit holes to travel down. My homepage is part of the "No AI" webring, whose membership consists of personal pages free of AI generated content.

Email

I still use Gmail, it's true, but I now interact with it through Thunderbird. I know that doesn't remove all of the Google tracking and whatnot, but it does allow me to access my email in a presentation that I prefer, and it allows me to use PGP/GPG, which does prevent Google from reading my emails, if it's actually used. Unfortunately, I don't receive many PGP-encrypted emails, but I've published my public key and my domain supports WKD, so it's easy to access if anyone wants to use it. My address is just "me" @ this domain, by the way, if anyone wants to send something my way.


RSS Feeds

Another advantage to Thunderbird is the built-in RSS reader. Instead of visiting, say, reddit.com, I can import my favorite subreddits directly into Thunderbird as RSS feeds, as well as following other blogs, podcasts, forums, and so on. No ads, no trackers, just the content I actually want to see.
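
For example, most subreddits expose a feed if you append .rss to the URL, e.g. https://www.reddit.com/r/selfhosted/.rss.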


Fediverse

Of course the Fediverse has to play a role. The "fediverse" is a network of social media applications that can all communicate with each other. It's sort of like if Facebook, Reddit, Snapchat, YouTube, ~~Twitter~~ "X", and so on were all interconnected… Except that each service is actually composed of thousands of individually-owned servers instead of a massive server farm owned by a single entity. If I decide to take down my server, my posts will eventually disappear but the fediverse will live on. No single entity controls the fediverse. Owning the server also means that I control the server. I can decide not to federate (interact with) other servers or limit which servers I want to federate with. I'm in control of my own social media.

SearXNG

"SearXNG is a metasearch engine, aggregating the results of other search engines while not storing information about its users." Essentially, instead of directly using Google or Bing, I search via my own SearNGX server which will create a new "identity" for every search, aggregate the results from dozens of other search engines, and then strip out all of the ads and sponsored results. That means I can search for things like "political news" and get results from all across the internet without getting blasted with advertisements for MAGA baseball caps for the next month.

Last thoughts

Breaking away from major social media and massive corporate entities isn't easy (or even totally feasible), but I do feel a lot better knowing that I have a little more control over this aspect of my digital life.

31 Jan 2025

Short updates - new keyboard and SSDs

New Keyboard

I picked up a Feker Alice98 in an attempt to get something a little more "ergonomic" than my previous keyboard. It's an "Alice" layout, with the interesting distinction of having two "B" keys.
This keyboard is also compatible with VIA, making keymapping, backlighting, and macros easy to manage. This keyboard isn't listed on VIA's website, though, so it requires manually importing the keyboard definition file (attached to this post). Hopefully VIA will add it officially, eventually.  

"New" SSDs for the cluster

I also picked up some gently-used Intel TLC SSDs for my main Proxmox nodes.

I had noticed before that the IO delay [1] on my nodes would creep into the 80%+ range when running any reasonably write-intensive task, like a system update. I also received feedback that this blog seemed slow, which I suspected might be caused by the same issue. While the consumer QLC SSDs I was previously using seem fast for normal desktop use, their shortcomings become quite noticeable when running multiple VMs. Here's a screenshot of the IO delay during fairly normal use before, compared to the IO delay now while running updates:

(blue is the IO delay)

While the old ones would routinely creep into the 50% or higher range while running fairly simple tasks, the new SSDs peaked around 5%, even when under load.
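
If you want to watch this metric yourself, iostat from the sysstat package reports it as %iowait (a quick sketch for Debian-based systems):

apt install sysstat
iostat -c 2    # prints CPU stats, including %iowait, every 2 seconds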

 

Note(s)

  1. ^ Amount of time that the CPU hangs idle while waiting for IO tasks (reading or writing to a drive) to complete.

18 Jan 2025

Let's Gopher!

I've decided it's time to get my gopher server running again, and wanted to outline the basic steps for anyone else looking to dive into the gopherhole.

install

I personally like gophernicus, a "modern full-featured (and hopefully) secure gopher daemon." Assuming that you're running a Debian server, we can install a simple gopher server via apt:

apt install gophernicus

installing screenshot

This should also install a socket unit and create a service template for gophernicus.

config

Gophernicus defaults to serving from either /var/gopher/ or /srv/gopher. On my install, the default config file is stored at /etc/default/gophernicus. I feel like /etc/default/ isn't used very often these days, but the intention is to provide a simple way for a developer to provide a default configuration for their application. We can create a new config somewhere reasonable, like /etc/gophernicus, or just modify the existing file in /etc/default/ like a lunatic. I'm choosing the latter, of course.

The important thing to add is a hostname, but we can also turn off any features we don't want.

My config looks like OPTIONS=-r /srv/gopher -h gopher.k3can.us -nu.

The -nu option just disables the "personal" gopherspace (serving of ~user directories). Unless you know what this does and intend to use it, I'd suggest disabling it.
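
After editing the config, restarting the socket picks up the new options, and enabling it keeps the server available across reboots (standard systemd commands):

systemctl restart gophernicus.socket
systemctl enable gophernicus.socket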

testing

We should now be able to start the service and access our empty gopherhole via systemctl start gophernicus.socket, which will create an instance of the gophernicus@.service unit. We can run a quick test by creating a file to serve and viewing the gophermap. To create the test file: touch /srv/gopher/test.txt, and then we can fetch the gophermap via telnet [ip address] 70.

 Trying 192.168.1.5...
Connected to 192.168.1.5.
Escape character is '^]'.

i[/]    TITLE   null.host   1
i       null.host   1
0test.txt                       2025-Jan-18 09:47     0.1 KB /test.txt   gopher.k3can.us 70
.
Connection closed by foreign host.

That little jumble of text is our gophermap. Lines starting with i indicate "information", while the 0 indicates a link to a text file. Gophernicus creates this map automagically by examining the content of the directory (although it also provides the option of creating a map by hand). To add files and folders, we can simply copy them into /srv/gopher and gophernicus will update the map to include the new files.
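
If you do want to write a map by hand, each line starts with a type character and display text, followed by tab-separated selector, host, and port fields. A hypothetical example (the whitespace between fields below should be tabs):

iWelcome to my gopherhole!
0About this server	/about.txt	gopher.k3can.us	70
1My phlog	/phlog	gopher.k3can.us	70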

From here, if we want to expose this publicly, we can simply route/port-forward through our router.

In my case, though, I'm going to need to configure a few more components before I open it up to the public... First, I use [apparmor](https://apparmor.net/) to limit application access, and second, my webserver lives behind a reverse proxy.

apparmor

For apparmor, I created a profile for gophernicus:

include <tunables/global>
# AppArmor policy for gophernicus
# by k3can

/usr/sbin/gophernicus {
  include <abstractions/base>
  include <abstractions/hosts_access>
  network inet stream,

  /etc/ld.so.cache r,
  /srv/gopher/ r,
  /srv/gopher/** r,
  /usr/bin/dash mrix,

}

This profile limits which resources gophernicus has access to. While gophernicus should be fairly secure as-is, this will prevent it from accessing anything it shouldn't on the off-chance that it somehow becomes compromised. Apparmor is linked above if you want to get into the details, but I'm essentially telling it that gophernicus is allowed to read a number of commonly-needed files, run dash, and access its TCP stream. Gophernicus will then be denied access to anything not explicitly allowed above.
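
To load the profile, I saved it under /etc/apparmor.d/ (the filename below follows the usual path-based naming convention; adjust to taste) and reloaded it:

apparmor_parser -r /etc/apparmor.d/usr.sbin.gophernicus
aa-status | grep gophernicus    # confirm the profile is loaded in enforce mode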

nginx

Lastly, to forward gopher through my reverse proxy, I added this to my nginx configuration:

#Gopher Stream Proxy

 stream {
     upstream gopher {
         server 192.168.1.5:70;
     }

     server {
              listen 70;
              proxy_pass    gopher;
     }
 }

Since nginx is primarily designed to proxy HTTP traffic, we have to use the stream module to forward the raw TCP stream to the upstream (host) server. It's worth noting that as a TCP stream, nginx isn't performing any virtual-host matching; it's simply passing anything that comes in on port 70 to the upstream server. This means that while I've defined a specific subdomain for the host, any subdomain will actually work as long as it comes in on port 70; gopher://blog.k3can.us, gopher://www.k3can.us, and even gopher://sdkfjhskjghsrkuhfsef.k3can.us should all drop you into the same gopherhole.

access

While telnet will show you the gophermap, the intended way to traverse gopher is through a proper client application. For Linux, Lynx is a command-line web browser/gopher client available in most repos, and for Android, DiggieDog is available through the Play Store.

next steps

Now all that's left for me to do is add content. I used to run a script that would download my Mastodon posts and save them to a "phlog" (a gopher blog), and I could potentially mirror this blog there, as well. That helped me keep the content fresh without needing to manually add files. I haven't quite decided if I want the gopherhole to primarily be a mirror of my other content, or if I want to be more intentional with what I put there.

Besides figuring out content, I'm also curious about parsing my gophernicus logs through Crowdsec. Unsurprisingly, there's not currently a parser available on the Crowdsec Hub, so this might take a little tinkering...

20 Oct 2024

Wavelog

I posted a while back about online amateur radio logging (here) and wanted to do a follow-up on some of the client software I've been using to log contacts and sync to both LOTW and QRZ.

I used to use CQRLOG, which is still a great logger, but it is primarily intended to be installed directly on the PC you're logging from. If you want to view QSOs or add a new entry, you need to have access to that PC.
For greater flexibility, there are now self-hosted, server-based logging platforms which allow access from any device with a web browser and internet access, such as Wavelog and Cloudlog. I'm going to refer to Wavelog throughout this post, but the features are similar between Wavelog and Cloudlog (the former being a fork of the latter).


Wavelog is a PHP application intended for a LAMP stack, although it actually works on a variety of systems: more on that at the end.

To start, here's a quick screenshot of the QSO entry, on both desktop and mobile:

At a glance, you can see all of the details of the station as soon as you enter the callsign. If the station has a QRZ profile, it will even display their profile photo!

Once you start building your logs, Wavelog can create a number of reports and analytics based on your logbooks, such as:

Or band usage:

It can also track progress towards awards, such as:

 

One of the features that I particularly enjoy is that you can upload QSL cards into your logbook, attached to the QSO entry: 

It can automatically retrieve eQSL cards, as well! As I mentioned before, Wavelog can sync entries to LOTW, QRZ, eQSL, and more, and will show you the confirmation status for each.

If you happen to use GridTracker, it can log directly to Wavelog. There is also a small helper application which will allow you to log directly from WSJT-X and automatically sync your QSO entry page to your radio via FLRig (freq, band, etc.).

 

I've been running a Cloudlog server for both myself and my father until about a month or two ago, when I switched to Wavelog. The migration from one to the other was fairly straightforward, and the Wavelog wiki provides instructions. I'm running Wavelog on an Nginx webserver with a MariaDB backend, but there is also a Docker image if you want something simpler. I did test Cloudlog on a lighttpd server, as well, and encountered no problems.

So far, I'm happy to report that Wavelog has been quite reliable. The only downtime I've encountered is when I've accidentally broken the webserver itself, and that's certainly not the fault of Wavelog.

If you're looking to try a new logging solution, Wavelog would get a strong recommendation from me.

Check it out at Wavelog.org