Tag - networking


3 Mar 2025

Refreshing Wireguard VPN

Quick post:

I've been using Wireguard to connect to my offsite back-up server for a little while now. It works quite well, but I've encountered an issue with one of its limitations: dynamic IP addresses. The remote network has a dynamic IP address and uses dynamic DNS to provide name resolution. This works perfectly fine when initializing the WG interface, but if the IP changes afterwards, WG does not attempt to re-resolve the domain name. The only way to restore the link is to restart the WG service.
This limitation isn't much of a problem for static IPs or devices that routinely disconnect and re-connect (like a cell phone or laptop), but when trying to run two servers with a 24/7 connection, it poses a problem.

Luckily, I think it's a problem with a simple solution:

#!/bin/sh
# Check VPN health by pinging the remote server's WireGuard address.
if ping -c 1 10.10.0.15 > /dev/null 2>&1; then
  echo "success"
else
  echo "failure, restarting interface"
  # Send a Pushover notification before restarting the interface.
  curl -s \
    --form-string "token=$PUSHOVERTOKEN" \
    --form-string "user=$PUSHOVERUSER" \
    --form-string "message=Lost Connection to remote server - Restarting VPN" \
    https://api.pushover.net/1/messages.json > /dev/null 2>&1
  # Bounce the WireGuard interface to force a fresh DNS resolution.
  ifdown wg_remote
  sleep 5
  ifup wg_remote
fi

I created this script, which periodically runs on my router via cron. If the router can ping the remote server via its WG address (10.10.0.15), we assume that the VPN is functioning. If the ping fails, it sends me a quick FYI via Pushover, then brings the VPN interface (wg_remote) down, waits a few seconds, and brings it back up, triggering the domain resolution.
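
For reference, the cron entry looks something like this (the script path and five-minute interval here are placeholders, not necessarily what I'm running):

# Hypothetical crontab entry: run the watchdog every five minutes.
*/5 * * * * /usr/local/bin/wg-watchdog.sh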

It's simple, but it seems to work.

21 Feb 2025

Cloudflare for the Selfhoster

If you selfhost any applications and have wondered how you can best access those applications from outside your network, you've undoubtedly come across Cloudflare. Cloudflare offers two services in particular that might be attractive to homelabbers and selfhosters: reverse proxying and "tunnels".

Both of these offer some degree of benefit; proxying potentially offers a degree of Denial of Service (DoS) 1 protection and use of their Web Application Firewall (WAF), and the tunnel additionally adds the benefit of circumventing the host’s firewall and port-forwarding rules. To sweeten the deal, both of these services are offered “for free”, with the proxying actually being the default option on their free plans.

Let's examine these benefits briefly and see why they're as popular as they are.

Advantages


DoS Protection

By virtue of being such a massive network, Cloudflare's proxies are able to absorb an enormous number of requests and block them from ever reaching the customer. You can think of it as a dam: when a flood of requests comes in, Cloudflare sits in front of the customer, holding back the waters.

WAF

Cloudflare’s WAF allows customers to set firewall rules on the proxy itself, which can block or limit outside requests from ever reaching the customer, similar to the DoS protection.

Bypassing Firewalls (tunnel)

Many firewalls are configured to allow outgoing connections and block incoming connections. This is great for security, since it prevents others from accessing your stuff from the public internet, but it can be a problem if you want people to access your website or services. Cloudflare's tunnel service takes advantage of the "allow outgoing" rule to establish a connection from the host server to Cloudflare, then allows incoming requests to tunnel through that connection and reach the host server.

Bypassing Port-Forwarding rules (tunnel)

If you're hosting a service from behind an IPv4 address, there's a good chance that you're using a technology called Network Address Translation (NAT) 2. This allows you to have multiple systems running behind a single public IP address. The concept can be taken further by Carrier Grade (CG) NAT, where the ISP applies NAT rules to its customers. The problem with NAT is that it becomes difficult to make an incoming request to a machine behind it. To an outside system, all of the machines behind the NAT layer appear to share a single address. To partially overcome this, a rule needs to be configured that says "when a request comes in for port X, forward it to private address Y, port Z." This allows, in a limited way, an outside connection to use the NAT address and a particular port to send traffic to a specific machine behind the NAT layer. In the case of CGNAT, however, these rules would need to be created by the ISP, a service which most do not offer. Just as it takes advantage of outgoing firewall rules, a tunnel also uses an outgoing connection to avoid the need for specific port-forwarding rules and then directly tunnels a list of ports straight to the host machine.

On the surface, this all sounds like a great product; it allows smaller customers to selfhost even if in a sub-optimal environment.



Considerations


Cloudflare’s proxy and tunnel models both have several aspects, however, which might make some want to reconsider their use.

DoS Protection

DoS attacks can vary in size and scope, and their impact will depend not only on the volume of the attack but also on the capabilities of the targeted system. For example, for a small home server, a few hundred connections per second might be enough to cause a denial of service. At the opposite extreme, Cloudflare's servers routinely handle an average of 100,000,000 requests per second without issue. With those numbers in mind, a DoS on a small home server might be so small that it goes unnoticed by Cloudflare's protections. I do have to say "might" here, though, because I could not find a definitive answer to how Cloudflare determines what a DoS attack looks like. I would expect, however, that the scale of a home server is so small compared to the majority of Cloudflare's customer base that an effective DoS attack would not look significantly different from "normal" traffic.
Lastly, and quite importantly, this protection only exists for attacks targeted at a domain name. Domains exist to make the internet easier for humans; for a computer script or bot, an IP address is far simpler. If an attacker targets the host's IP address directly, that attack will completely bypass whatever protection Cloudflare was providing. If you're behind CGNAT, this means the attack will target the ISP, but if you have a publicly addressable IP address, that attack is directly targeting your home router.

Security

Cloudflare offers these services for free, and as the saying goes: "If you are not paying for it, you're not the customer; you're the product." Cloudflare advertises that their proxy and tunnel services can optionally provide "end-to-end" (e2e) encryption. In the traditional and commonly used definition, this means that traffic is encrypted by the client (you or your users) and decrypted by the host (your server). Under that definition, no intermediary device can decrypt and read the traffic.

Cloudflare, however, uses the term a little differently, as you can see in this graphic.

cloudflare’s “end-to-end” encryption

Instead of providing traditional e2e, Cloudflare acts as a “Man in the Middle” (MITM), receiving the traffic, decrypting it, analyzing it, and then re-encrypting it before sending it along. Cloudflare does this in order to provide their services; they collect and store the unencrypted data in order to apply WAF rules, analyze patterns, and so on.
Now, Cloudflare is a giant company with billions in contracts; contracts they could potentially lose if they were found to be misusing customer data. They wouldn't benefit from leaking your nude photos from your NextCloud instance or exposing your password for Home Assistant, but you should understand that by giving them the keys to your data, you are placing your trust in them. This MITM position also means that, theoretically, Cloudflare could alter your content without you (or your users) knowing about it. Normally this would cause a modern browser to display a very large "SOMEONE MIGHT BE TRYING TO STEAL YOUR DATA" warning, but because you are specifically allowing Cloudflare to intercept your data, the browser has no way of confirming whether the content actually came from your server or from Cloudflare themselves.
Cloudflare does have a privacy policy, which explains exactly how Cloudflare intends to use your data, and a transparency report, which is intended to show exactly how many times each year Cloudflare has provided customer data to Government entities.

The warning you would normally get during a MITM attack:
SSL Warning

Content Limitations

Lastly, Cloudflare, by default, only proxies or tunnels certain ports. If you want to forward an unusual port through Cloudflare, like 70 or 125, you would need to use a paid account or install additional software. Cloudflare's Terms of Service also limit the type of data you can serve through their free proxies and tunnels, such as prohibiting the streaming of video content on their free plans.

Alternatives


Are there other ways to get similar benefits?

  • If only a limited number of users need access to your server or network and you have a public IP address, the best solution by far is to use a VPN such as Wireguard. Wireguard allows you to securely access your network without exposing any attack surface to bots or malicious actors (a minimal configuration sketch follows this list).
  • If you don't have a public IP, there's a service called "Tailscale" which uses the same outgoing-connection trick to bypass firewalls and CGNAT, but instead of acting as a MITM privy to all of your traffic, Tailscale simply coordinates the initial connection and then allows the client and host to establish their own secure, encrypted connection.
  • If you do want to expose a service to the world (like a public website) and you have a public IP address, you can simply forward the needed port/s from your router to the server (typically 443 and possibly 80).
  • If you want your service to be exposed to the world, but are stuck behind CGNAT, then a Virtual Private Server (VPS) is a potential option. These typically have a small cost, but they provide you with a remote server that can act as a public gateway to your main server. They can also provide a degree of DoS protection, since they're on a much larger network and you can simply shut off the connection between the VPS and your server until things calm down.
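
Going back to the first option: for anyone curious what a basic Wireguard setup actually looks like, here's a minimal sketch of the server side. The addresses, port, and key file names are placeholders rather than a recommended configuration.

# Minimal WireGuard server sketch (addresses, port, and paths are placeholders).
umask 077
wg genkey | tee server.key | wg pubkey > server.pub

# Bare-bones config; add one [Peer] block per client device.
cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = $(cat server.key)

[Peer]
PublicKey = <client public key goes here>
AllowedIPs = 10.10.0.2/32
EOF

# Bring the tunnel up.
wg-quick up wg0

Clients get a mirror-image config pointing at your public IP and port, and nothing else on your network is ever exposed.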

Closing thoughts

Cloudflare offers some great benefits, but if you're particularly security-minded, you may want to look into alternatives. Even though Cloudflare is trusted by numerous customers around the world, you still have to decide whether you want to trust them with your data. Alternatives do exist, but Cloudflare's offerings are mostly free and comparatively easy to use.




1. DoS, or a "Denial-of-Service" attack, is an attack in which an attacker floods a server with internet traffic to prevent legitimate users from accessing its services. Cloudflare offers an example of one type of DoS attack here.

2. NAT translates private IP addresses in an internal network to a public IP address before packets are sent to an external network.

19 Feb 2025

Inspect What You Expect

That's a phrase I heard for the first time when I became a manager; my boss would tell me I needed to "inspect what I expect." To be honest, I don't think I fully understood the phrase until much later. In the context of leadership, it means that if you have certain expectations for your team, you should actively verify that those expectations are being met and investigate the cause if they're not. Don't wait until eval season to say "you failed to meet expectations"; instead, review expectations regularly, and if someone isn't meeting them, it's your job to figure out why. Have you defined the expectations in a way that they understand? Are your expectations realistic? Did you provide the resources they needed to meet the expectations? Did you provide guidance when needed? And so on.

You might be asking: "Sure, but why is this on a blog about computers and stuff?"

Expectations


You might have the expectation that your network is secure. You don't remember adding any firewall rules or forwarding any ports, or maybe you followed some YouTube tutorial and they told you to proxy your incoming connections through Cloudflare. Maybe you set up a "deny all" line in that NGINX config or you only access your network through a VPN. You expect that your network is safe from outside actors.

To use a real world example, my father set up several Lorex brand CCTVs on his home network to save video to an NVR. He configured a VPN to allow secure remote access to the NVR and assumed that everything was safe. He didn't create any firewall rules or port forwarding to allow access by anything other than the VPN. He expected that this was safe, and in theory, it should have been.

Inspection


We traveled together for a holiday, and while drinking coffee one morning, I showed him Censys. Censys is a search engine, but unlike Google or Bing, which search for webpages, Censys searches for the devices themselves. It indexes the IP addresses of everything on the internet, along with everything about them. You can write queries to search this index for almost anything you can think of.

For a quick demonstration, I just searched for his home IP address, and we were both surprised when we saw the results:
not actually his results

Umm... What?

For those who might not know, RTSP stands for Real-Time Streaming Protocol. It's a protocol designed to stream video over a network, commonly used by CCTV cameras. The HTTP port was the HTML login screen for one of his cameras. We clicked it, I sarcastically tried a default login (admin, admin), and we found ourselves staring at his basement.
As far as he knew, he had taken all the appropriate steps to secure the devices on his network and expected that nothing was unsafely exposed, but he hadn't inspected that expectation. We found that the camera was factory-configured to expose itself via UPnP, a technology which allows devices to request changes to port-forwarding and firewall rules without user involvement. This is supposed to allow easy set-up for inexperienced users, but it can also significantly compromise security without the user knowing about it. In our case, my father is not an inexperienced user; he's been a computer engineer since the 80s and has even worked for one of the major producers of networking equipment. He took all the right steps to get his expected result, he just hadn't inspected it to ensure that his expectation was being met.

Inspect What You Expect


Censys can be a great starting point when evaluating your network by helping you understand what you have exposed to the internet. Censys queries can be as simple as an IP address if you just want to look at a single host, or they can search broadly for very specific things.

Here are the results of a very simple query looking for exposed FTP servers based out of Japan:
japan

There are a lot of FTP servers, obviously, but did you notice that SSH server on port 10022? Some people expect that services will be hidden if they run them on non-standard ports, but don't inspect to see if that's actually the case. Here, we can see that the SSH server is still quite visible despite being on a non-standard port, just like those non-standard HTTP servers on the other entries. Clicking into an entry will provide even more information, like software versions, request responses, and so on.

Through Censys, I realized that I was running an older version of Nginx than I thought I was, and that this older version had a number of vulnerabilities that were patched in later versions. I expected that I was running a current version, but my inspection showed me otherwise.

Final thoughts


While a tool like Censys isn't the only tool you should use to inspect your security expectations, it's a great starting point, since it can show you what your network looks like to the internet. It's also a fun tool for exploring the internet from a different angle. Instead of just searching the surface of the web for YouTube videos and news stories, try searching deeper for Roombas, smart lights, or security cameras.

The important takeaway, though, is that just because you think something is working the way you expect doesn't always mean that it is.
The only way to know for sure is to inspect what you expect.

18 Jan 2025

Let's Gopher!

I've decided it's time to get my gopher server running again, and wanted to outline the basic steps for anyone else looking to dive into the gopherhole.

install

I personally like gophernicus, a "modern full-featured (and hopefully) secure gopher daemon." Assuming that you're running a Debian server, we can install a simple gopher server via apt:

apt install gophernicus

installing screenshot

This should also install a socket unit and create a service template for gophernicus.

config

Gophernicus defaults to serving from either /var/gopher/ or /srv/gopher/. On my install, the default config file is stored at /etc/default/gophernicus. I feel like /etc/default/ isn't used very often these days, but the intention is to provide a simple way for a developer to ship a default configuration for their application. We can create a new config somewhere reasonable, like /etc/gophernicus, or just modify the existing file in /etc/default/ like a lunatic. I'm choosing the latter, of course.

The important thing to add is a hostname, but we can also turn off any features we don't want.

My config looks like OPTIONS=-r /srv/gopher -h gopher.k3can.us -nu.

The -nu option just disables the "personal" gopherspace (serving of ~user directories). Unless you know what this does and intend to use it, I'd suggest disabling it.

testing

We should now be able to start the service and access our empty gopherhole via systemctl start gophernicus.socket, which will create an instance of the gophernicus@.service unit. We can run a quick test by creating a file to serve and viewing the gophermap. To create the test file: touch /srv/gopher/test.txt, and then we can fetch the gophermap via telnet [ip address] 70.

 Trying 192.168.1.5...
Connected to 192.168.1.5.
Escape character is '^]'.

i[/]    TITLE   null.host   1
i       null.host   1
0hello-world.txt                       2025-Jan-18 09:47     0.1 KB /test.txt   gopher.k3can.us 70
.
Connection closed by foreign host.

That little jumble of text is our gophermap. Lines starting with i indicate "information", while the 0 indicates a link to a text file. Gophernicus creates this map automagically by examining the contents of the directory (although it also provides the option of creating a map by hand, as sketched below). To add files and folders, we can simply copy them into /srv/gopher/ and gophernicus will update the map to include the new files.
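
If you do want to write a map by hand, each line is just an item type character and a display name, followed by tab-separated selector, host, and port fields. A minimal hand-written map might look something like this (the titles and selectors here are made up for illustration; printf is used so the tab separators are explicit):

# Write a simple gophermap by hand; \t marks the tab-separated fields.
{
  printf 'iWelcome to my gopherhole\tfake\tnull.host\t1\n'
  printf '0About this server\t/about.txt\tgopher.k3can.us\t70\n'
  printf '1Phlog\t/phlog\tgopher.k3can.us\t70\n'
} > /srv/gopher/gophermap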

From here, if we want to expose this publicly, we can simply route/port-forward through our router.

In my case, though, I'm going to need to configure a few more components before I open it up to the public... First, I use apparmor (https://apparmor.net/) to limit application access, and second, my webserver lives behind a reverse proxy.

apparmor

For apparmor, I created a profile for gophernicus:

include <tunables/global>
# AppArmor policy for gophernicus
# by k3can

/usr/sbin/gophernicus {
  include <abstractions/base>
  include <abstractions/hosts_access>
  network inet stream,

  /etc/ld.so.cache r,
  /srv/gopher/ r,
  /srv/gopher/** r,
  /usr/bin/dash mrix,

}

This profile limits which resources gophernicus has access to. While gophernicus should be fairly secure as is, this will prevent it from accessing anything it shouldn't on the off-chance that it somehow becomes compromised. Apparmor is linked above if you want to get into the details, but I'm essentially telling it that gophernicus is allowed to read a number of commonly-needed files, run dash, and access its TCP stream. Gophernicus will then be denied access to anything not explicitly allowed above.
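
For completeness, once the profile is saved (I'm assuming a path of /etc/apparmor.d/usr.sbin.gophernicus here), it can be loaded and checked roughly like so:

# Load (or reload) the profile and confirm it's being enforced.
apparmor_parser -r /etc/apparmor.d/usr.sbin.gophernicus
aa-status | grep gophernicus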

nginx

Lastly, to forward gopher through my reverse proxy, I added this to my nginx configuration:

#Gopher Stream Proxy

 stream {
     upstream gopher {
         server 192.168.1.5:70;
     }

     server {
              listen 70;
              proxy_pass    gopher;
     }
 }

Since nginx is primarily designed to proxy HTTP traffic, we have to use the stream module to forward the raw TCP stream to the upstream (host) server. (Note that the stream block lives at the top level of the nginx config, alongside the http block, not inside it.) It's worth noting that as a TCP stream, nginx isn't performing any virtual host matching; it's simply passing anything that comes in on port 70 along to the upstream server. This means that while I've defined a specific subdomain for the host, any subdomain will actually work as long as it comes in on port 70; gopher://blog.k3can.us, gopher://www.k3can.us, and even gopher://sdkfjhskjghsrkuhfsef.k3can.us should all drop you into the same gopherhole.

access

While telnet will show you the gophermap, the intended way to traverse gopher is through a proper client application. For Linux, Lynx is a command-line web browser/gopher client available in most repos, and for Android, DiggieDog is available through the Play Store.
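
Once Lynx is installed, you can point it straight at a gopher URL, for example:

# Browse the gopherhole from the command line.
lynx gopher://gopher.k3can.us/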

next steps

Now all that's left for me to do is add content. I used to run a script that would download my Mastodon posts and save them to a "phlog" (a gopher blog), and I could potentially mirror this blog there as well. That helped me keep the content fresh without needing to manually add files. I haven't quite decided if I want the gopherhole to primarily be a mirror of my other content, or if I want to be more intentional with what I put there.

Besides figuring out content, I'm also curious about parsing my gophernicus logs through Crowdsec. Unsurprisingly, there's not currently a parser available on the Crowdsec Hub, so this might take a little tinkering...

1 Nov 2024

New Router: BananaPi R3 - Part 3 - Configuration

After being subjected to numerous mean glares from the wife and accusations of "breaking the internet", I think I've got it all configured now...

https://www.k3can.us/garfield.gif

Ironically, the more "exotic" configuration, like the multiple VPN and VLAN interfaces, was pretty simple to set up and worked without much fuss. The part that had me pulling my hair out and banging my head against the desk was just trying to get the wifi working on 2.4GHz and 5GHz at the same time... something wireless routers have been doing flawlessly for over a decade. After enough troubleshooting, googling, and experimenting, though, I had a working router.

I installed AdGuardHome for DNS and ad blocking, but kept dnsmasq for DHCP and local rDNS/PTR requests. dnsmasq's DHCP options direct all clients to AGH, and AGH forwards any PTR requests it receives back to dnsmasq.
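
The dnsmasq half of that is just a DHCP option pointing clients at AGH, with AGH's private reverse-DNS upstream pointed back at dnsmasq. On OpenWRT, it looks something like this (the IP is a placeholder for wherever AGH is listening):

# Announce AdGuard Home as the DNS server via DHCP option 6.
uci add_list dhcp.lan.dhcp_option='6,192.168.1.1'
uci commit dhcp
/etc/init.d/dnsmasq restart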

Next, I installed a Crowdsec bouncer and linked it to my local Crowdsec engine. Now, when a scenario is triggered, instead of banning the offending address at the server, it will be blocked at the edge router.
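
Linking a bouncer to an existing engine is mostly a matter of registering it and copying the resulting API key into the bouncer's config on the router; roughly (the bouncer name is arbitrary):

# On the machine running the Crowdsec engine: register the new bouncer.
# This prints an API key to paste into the bouncer's config on the router.
cscli bouncers add bpi-r3-router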

 

Lastly, I installed and configured SQM (Smart Queue Management), which controls the flow of traffic through the router to the WAN interface. Without this management, the WAN interface buffer can get "bogged down" under heavy traffic loads, causing other devices to experience high latency or even lose their connection entirely. SQM performs automatic network scheduling, active queue management, traffic shaping, rate limiting, and QoS prioritization.
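
On OpenWRT, SQM comes from the sqm-scripts package and is configured through UCI; a rough sketch of the setup looks like this (the interface name and bandwidth figures are placeholders, not my actual values):

# Install SQM and its LuCI app.
opkg install sqm-scripts luci-app-sqm
# Configure the default queue; bandwidth is in kbit/s, roughly 90% of measured line speed.
uci set sqm.@queue[0].interface='wan'
uci set sqm.@queue[0].download='90000'
uci set sqm.@queue[0].upload='18000'
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].enabled='1'
uci commit sqm
/etc/init.d/sqm restart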

For a comparison, I used Waveform's bufferbloat test to measure latency under load.

Before SQM:

====== RESULTS SUMMARY ======
Bufferbloat Grade                                      C
Mean Unloaded Latency (ms)                             51.34
Increase In Mean Latency During Download Test (ms)     76.01
Increase In Mean Latency During Upload Test (ms)       8.69

After SQM:

====== RESULTS SUMMARY ======
Bufferbloat Grade                                      A
Mean Unloaded Latency (ms)                             38.92
Increase In Mean Latency During Download Test (ms)     12.75
Increase In Mean Latency During Upload Test (ms)       1.5

I have to say, I'm pretty happy with the results!
Going from a grade C with a 76 ms increase to a grade A with only a 12.75 ms increase is a pretty substantial difference. This does increase the load on the CPU, but with the BPI R3's quad-core processor, I expect that I'll still have plenty of overhead.

Overall, I think I'm happy with the configuration and the BPI R3 itself.

27 Oct 2024

Troubleshooting a minor network issue.

So, while surfing the web and reading up on how I wanted to configure my new BananaPi R3, I encountered a sudden issue with my network connection:

It seemed like I could reach the internet, but I had suddenly lost access to my home network services. I tried to SSH into my webserver and was met with a "Permission denied" error. Since I had earlier attached an additional USB NIC to connect to the BPI, I thought that perhaps the laptop had gotten the interfaces confused and was no longer tagging packets with the correct VLAN ID for my home network. Most of my servers are configured to refuse any requests that don't come from the management VLAN, so this explanation made sense. After poking around in the network settings, all of the VLAN settings appeared correct, but I did notice that the link had negotiated at 100 Mbps instead of the usual 1000. I tried to reconfigure my network settings, manually setting the link to 1000 Mbps, resetting interfaces, changing network priorities, etc. I then tried the classic "reboot and pray" technique, only to find that my wired network connection was down entirely. I wasn't receiving an IP from the DHCP server and the laptop kept reporting that the interface was UP, then DOWN, then UP again, then DOWN again.
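
(As an aside, if you want to check what speed a wired link has negotiated without digging through GUI settings, ethtool will report it; the interface name below is just an example.)

# Report negotiated speed, duplex, and link state for a wired interface.
ethtool eth0 | grep -E 'Speed|Duplex|Link detected'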

Now I started to think that perhaps the issue was hardware related. My usual NIC is built into the laptop's dock, so I thought I would try power cycling the dock itself. This didn't seem to have any effect, besides screwing up my monitors' placement and orientation. My next thought was that there might be an issue with the patch cable. "Fast Ethernet" (100 Mbps) can theoretically function on a damaged cable, so that might explain the lower link speed, and if the damage were causing an intermittent issue, that could also explain the up/down/up/down behavior.

Being the smart homelabber that I am, I disconnected both ends of the cable and connected my ethernet cable tester. All 8 wires showed proper continuity, though, suggesting that the cable was fine. When I plugged the cable back in, however, I noticed that I was suddenly getting the normal 1 GbE again, but the link was still going up and down. This led me to the conclusion that the cable likely was the issue, despite it passing the cable test. I replaced the cable entirely with a different one and found that I now had a stable 1 GbE connection, an IP address, and access to my network like usual.

Looking back, I think replacing the cable should have been troubleshooting step 1 or 2.

Also, in retrospect, there were some clues that might have let me fix the issue before this point, if I had only put the pieces together. I had noticed on another day that the link speed had dropped to 100 Mbps, but it seemed to correct itself, so I ignored it instead of investigating. While working, I found that Zoom and other applications had started to report my internet connection as "unstable", and a speedtest showed my internet bandwidth to be drastically lower than it used to be. I assumed this was due to my ISP just being unreliable, since reduced speeds and entire outages are not unusual where I live.

In hindsight, I think these were all indications that there was a layer 1 issue in my network. In the future, I'll have to remember not to over-think things, and maybe just try the simplest solutions first.

 

 

New Router: BananaPi R3 - Part 2 - Flashing

Part 1 is here.

Now that the router is assembled, the next step is to decide where to flash the firmware. As I mentioned in the last post, this device offers a handful of options. Firmware can be flashed to the NOR, NAND, eMMC, or simply run from the SD card. From what I've read, it's not possible to boot from an m.2 card, though. That option is only for mass storage.

After a bit of reading, my decision was ultimately to install to all four!  Sort of...

Image showing the leads connected to the UART connector and the DIP switches
The DIP switches and the leads connecting to the UART

My plan is to install a "clean" OpenWRT image to both the NOR and NAND.  The NAND image will be fully configured into a working image, and then copied to the eMMC. The eMMC will then be the primary boot device going forward.  If there's a problem with the primary image in the future, I would then have a cascading series of recovery images available. At the flip of a switch, I can revert to the known working image in the NAND, and if that fails, then I can fallback to the perfectly clean image in the NOR.

...And I do mean "at the flip of a switch". Due to the way that the bootable storage options are connected, only 2 of the 4 can be accessed at a time. Switching between NOR/NAND and SD/eMMC requires powering off the BPI and toggling a series of 4 DIP switches, as seen in the official graphic below:

https://wiki.banana-pi.org/images/thumb/4/4c/BPI-R3-Jumper.png/320x107x320px-BPI-R3-Jumper.png.pagespeed.ic.Qyd9EK01n9.png

Switches A and B determine which device the BPI will attempt to boot from. Switch C sets whether the NOR or NAND will be connected, and switch D does the same for the SD and eMMC. To copy an image from the SD card to the NOR, for example, switch D must be high (1) to access the SD card, and switch C must be low (0) to access the NOR. Since switches A and B set the boot device independently of which devices are actually connected, it would seem that you could set them to an impossible state and render the device unbootable, like 1110 or 1001.

To accomplish my desired install state, I had to first write the OpenWRT image to the SD card on a PC and then insert it into the BPI. With the switches set to 1101, I could write the image from the SD card to the NOR, then flip the switches (with the BPI powered off) to 1111 to copy the image to the NAND. Lastly, I can remove the SD card and reboot with the switches at 1010 to boot from the NAND. Then I'll configure the BPI into my fully configured state. This is the step I'm currently working on. I have it about 80% configured, but will need to actually install it in my network before I can complete and test the remaining 20%. Once it is installed, tested, and fully configured, I'll copy the NAND to the eMMC before finally setting the switches to 0110 and booting from the eMMC for ongoing use.
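
For the first step, writing the image to the SD card from a PC looks roughly like this (the image name and target device are placeholders; be very sure of the device node before running dd):

# Decompress the OpenWRT sdcard image and write it straight to the SD card.
zcat openwrt-*-bananapi_bpi-r3-sdcard.img.gz | dd of=/dev/sdX bs=4M conv=fsync status=progress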

Unfortunately, I haven't had a good opportunity to take my network offline to install the new router, so the last bit of configuration might need to wait a little while...