K3CAN


12 Jan 2025

Wireguard - Securely connecting to your network

Background

After setting up my home server, I found that I wanted a way to securely access and manage my system when I was away from home. I could have opened the SSH port to the internet, but the idea of having such a well-known port exposed to the internet made me wary not only of attempts to gain unauthorized access, but also of simple DoS attacks. I decided the far better option was to use a VPN, specifically, Wireguard.

Wireguard is a simple, lightweight VPN, with clients for Linux, Android, Windows, and a handful of other operating systems. In addition to the security offered by its encrypted VPN tunnel, Wireguard is also a "quiet" service; that is, the application will entirely ignore all attempts to communicate with it unless the correct encryption key is used. This means that common intelligence gathering tactics, such as port scans, won't reveal the existence of the VPN. From the perspective of an attacker, the port being used by Wireguard appears to be closed. This can help prevent attacks, including DoS, because an attacker simply won't know that there's anything there to attack.

Now, when I set about trying to install and configure Wireguard, I found that many of the online guides were either overly complex or so incredibly simple that they only provided a list of commands to run, without ever explaining what those commands did. The overly complex explanations turned out to be too confusing for me, but on the other hand, I'm also not one to run random commands on my system without understanding what they do. I did eventually figure it out, though, so I thought I'd try writing my own explanation to hopefully offer others a middle ground.

Introduction

Wireguard uses public-key cryptography to create a secure tunnel between two "peer" machines at the network layer. It's free, open-source software, and is designed to be simple to use while still presenting a minimal attack surface.

In essence, wireguard creates a new IPv4 network on top of an existing network. On a device running wireguard, the wireguard network simply appears as an additional network interface with a new IP address. On a typical laptop, for example, one might have several network interfaces, like lo, eth0 and wlan0, with an additional interface for wireguard appearing as wg0.

Installation

Installing Wireguard is typically done through your package manager (e.g. apt, dnf, or pacman), though it can also be installed as a container or sandboxed package via Podman, Flatpak, Snap, AppImage, or Docker.

For example: apt install wireguard-tools. Depending on your system setup, you may need to prefix this and some of the other commands in this walkthrough with sudo.

Configuration

Once installed, wireguard will need to be configured. Wireguard considers devices to be peers, rather than using a client/server relationship. That means there is no "primary" device that others connect to; rather, they can all connect to each other, just like on a typical LAN. That said, for the sake of explanation, it is sometimes easier to understand if we use the server/client labels, despite the devices not technically functioning in that relationship. In other words, I'm going to say "server" and "client" because I find it less confusing than saying "peer A" and "peer B".

We'll start on the "server" (aka. Peer A, not technically a server, yada yada) by running wg genkey | tee private.key | wg pubkey > public.key.
wg genkey produces a random "private key", which will be used by the peer to decrypt the packets it receives. tee is a basic Linux command that sends the output of one command to two destinations (like a T-shaped pipe). In this case, it writes the private key to a file called "private.key" and also passes the same data to the next command, wg pubkey. This command derives a matching "public key" from the private key it was given. Lastly, > writes the output of wg pubkey to the file public.key. After running that command, we now have two files: private.key and public.key.

We can view the actual key by running cat private.key or cat public.key.

We'll repeat those same steps on the "client".

Next, let's decide what IP network address we want to use for the wireguard network. Remember that this needs to be different from your normal local network. There are several private network ranges reserved for local use, but one of the most common for home networks is 192.168.1.0/24. In a /24 network, the first three "octets" (192, 168, and 1) identify the network, while the last one identifies the device's address within that network. So, what's important here is that if your primary network is 192.168.1.0/24, we assign the wireguard network a different range, like 192.168.2.0/24. By changing that 1 to a 2, we're addressing an entirely different network.
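As a quick sanity check (not part of the Wireguard setup itself), Python's standard ipaddress module can confirm that the two /24 ranges don't overlap and that a host like 192.168.2.10 belongs only to the wireguard network:

```python
import ipaddress

# The existing LAN and the new wireguard network differ in the third
# octet, so they are two distinct /24 networks.
lan = ipaddress.ip_network("192.168.1.0/24")
wg = ipaddress.ip_network("192.168.2.0/24")
print(lan.overlaps(wg))  # False: the two ranges share no addresses

# A host address in the wireguard range belongs to that network only.
host = ipaddress.ip_address("192.168.2.10")
print(host in wg)   # True
print(host in lan)  # False
```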

Now, we'll create the configuration files themselves. On the server, we'll create a configuration file by using a text editor, such as nano. nano /etc/wireguard/wg0.conf will create a file called wg0.conf and open it for editing. The name of this configuration will be the name of the network interface. It doesn't need to be wg0, but that's a simple, easy to remember option.

Here is an example configuration file for the server:

[Interface]
PrivateKey = (Your server's private key here)
Address = 192.168.2.10/24  
ListenPort = 51820

[Peer]
PublicKey = (Client's public key here)
AllowedIPs = 192.168.2.0/24
Endpoint = (see below)

PrivateKey is the private key we generated on this server.

Address is the IP address that we're assigning to this server on the wireguard interface. This is our wireguard network address.

ListenPort is the "port" that wireguard is going to "listen" to. If you're not familiar with ports, think of it this way: an IP address is used to address packets to a specific device, and a port number is how you address those packets to a specific application on that device. In this case, Wireguard will be "listening" for packets that come in addressed to port number 51820. That number is somewhat arbitrary, but for uniformity, certain applications tend to use certain port numbers and 51820 is typical for wireguard.

The [Peer] section defines the other system(s) on the network, in this case, our "client" system.

PublicKey is the public key created on the client device.

AllowedIPs tells wireguard which IP addresses it should forward to this peer. Here, we're telling it that any packets addressed to the 192.168.2.0/24 network should be forwarded out to this peer (our client system).

Endpoint is how we reach this peer. Because wireguard creates a network on top of an existing network, we need to tell wireguard how to get to the peer on the existing network before it can use the wireguard network. If you're connecting two devices on your LAN, you would just enter the normal IP address of the peer, such as 192.168.1.123. If you're connecting to another device over the internet, you will need a way to address packets to that device, such as a public IP address or a domain name. This will be specific to your network situation. Following the IP or domain name is a colon and the port number. For this example, we'll assume the other machine is on your LAN at address 192.168.1.123, so we can just enter the IP address and the port number, like so: 192.168.1.123:51820. Lastly, save the file.

We'll then reverse this process on the "client", assigning it a different IP address on the same network, such as 192.168.2.20/24, and using the client's private key and the server's public key. Once we've completed the configuration files on both devices, we can use the command wg-quick up wg0, telling wireguard to start (bring up) the wg0 interface we just configured.
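For illustration, here's a sketch of what the client's wg0.conf might look like (the 192.168.2.20 address and the server's LAN address of 192.168.1.50 are example values; substitute your own):

```ini
[Interface]
PrivateKey = (Your client's private key here)
Address = 192.168.2.20/24

[Peer]
PublicKey = (Server's public key here)
AllowedIPs = 192.168.2.0/24
Endpoint = 192.168.1.50:51820
```

Note that the client doesn't need a ListenPort entry; wireguard will pick a port automatically when it initiates the connection.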

Final Notes

It should be noted that the Endpoint setting is only required on one of the two peers. This can be useful if you want to use wireguard while you're away from home. My "server" at home may keep the same public IP all the time, but my portable laptop will have a different public IP address every time I move to a new network. In this case, I would remove the Endpoint setting entirely from the configuration file on the server, and only use that setting on the laptop. Remember, one system needs to be able to reach the other over an existing network before wireguard can create the VPN connection between them. Most of the issues a user might encounter while setting up wireguard aren't actually related to wireguard itself, but rather to the underlying network. Firewalls and Network Address Translation (NAT) are the most common causes of problems, but the steps to address these issues will vary significantly depending on your network situation.


9 Dec 2024

My Proxmox Disaster Recovery Test (of the unplanned variety)

Welp, I lost another drive.

Homelab disasters are inevitable; expected, even. Recently, a failed disk took out an entire node in my Proxmox cluster. While the initial shock was significant, a solid backup/restore plan and a High Availability (HA) setup ensured a pretty swift recovery.

When I first started playing with my homelab, I was primarily using RPis and I lost a couple SD cards to corruption or failure. This helped demonstrate to me the importance of regular backups, and I've made an effort to back up my homelab systems ever since. I now virtualize most of my servers via Proxmox and perform nightly backups to a local Proxmox Backup Server (PBS), which is then synchronized to an offsite PBS server. I've tested the restore function a few times and it seemed like a fairly straightforward process.

For background, my current Proxmox cluster consists of three Lenovo Tiny SFF PCs. Two of those PCs are currently running only a single internal storage device, which is used for the boot and host partitions as well as storage for all of the guest OSes. This means that if a disk fails, it takes out the entire node and everything running on it.

...Which is exactly what happened a couple weeks ago. Booting into BIOS showed that the drive had failed so hard that the system didn't even acknowledge a drive was installed at all. The drive, by the way, was a Critical branded NVMe that I had purchased only two months prior. That's just enough time to be outside of Amazon's return period, yet significantly short of any reasonable life expectancy… but I digress. With the drive gone, I lost that node's Proxmox host OS and all of the VMs and containers running on that node. The HA-enabled guests were automatically migrated to one of the remaining nodes, exactly as intended (yay!). The non-HA guests, I had to manually restore from PBS backup. I was quite pleased with how quick and easy it was to restore a guest from PBS. Everything could be done through the web GUI in just a couple clicks. It's obviously never fun to lose a disk, but PBS made the recovery pretty painless overall.


With the guests back up and running again, I removed the failed node from the cluster and purged its remaining config files from /etc/pve/nodes.

For the failed node itself, I had to replace the drive and then reinstall Proxmox from scratch. From there, I pointed apt to my apt-cacher-ng server and then ran a quick Post Install Script, before configuring my network devices and finally adding the "new" node back into the cluster. The whole process took only a couple hours (including troubleshooting and physically installing the new drive), and most of the hosted systems (such as this blog) were only offline for a handful of minutes, thanks to the High Availability set-up.

Needless to say, I was quite happy with my PBS experience.   
...And not so much my experience with Critical’s NVMe drives. 


23 Nov 2024

Caching Apt with Apt-Cacher NG

It recently occurred to me that as I update each Linux container or VM, I'm downloading a lot of the same files over and over again.  While the downloads aren't huge, it still seems wasteful to request the same files from the repo mirrors so many times... So why not just download the update once and then distribute it locally to each of my systems?  

That's the purpose of a caching proxy.

I chose apt-cacher-ng as it's very simple to set up and use, so I spun up a dedicated LXC and installed apt-cacher-ng via apt. Once it was up and running, it was just a matter of following the included documentation to point all of my other systems to that cache.
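Pointing a client at the cache is typically just a one-line apt configuration file. A sketch (the 192.168.1.10 address is a placeholder for wherever your apt-cacher-ng instance lives; 3142 is apt-cacher-ng's default port):

```
# /etc/apt/apt.conf.d/00aptproxy  (filename is arbitrary)
Acquire::http::Proxy "http://192.168.1.10:3142";
```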

After upgrading just a couple of systems, I can already see the cache doing its job:

Those "hits" are requests that were able to be fulfilled locally from the cache instead of needing to download the files from the repo again. Since it caches every request, it actually becomes more efficient the more it's used, so hopefully the efficiency will increase even more over time.

So what exactly is happening?

First, this is not a full mirror of the Debian repos. Rather, apt-cacher ng acts as a proxy and cache. When a local client system wants to perform an update, it requests the updated packages from apt-cacher instead of the Debian repo directly. If the updated package is available in apt-cacher's local cache already, it simply provides the package to the requesting client. If the package is not in the local cache, then the proxy requests the package from the repo, provides that package to the client, and then saves a copy of the package to the cache. Now it has a local copy in case another system requests the same package again.
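The cache-or-fetch logic can be sketched in a few lines of Python (a toy illustration of the idea, not apt-cacher ng's actual implementation):

```python
def make_proxy(fetch_from_repo):
    """Return a request function that serves packages from a local
    cache, fetching from the upstream repo only on a cache miss."""
    cache = {}
    stats = {"hits": 0, "misses": 0}

    def request(package):
        if package in cache:
            stats["hits"] += 1            # served locally
        else:
            cache[package] = fetch_from_repo(package)
            stats["misses"] += 1          # fetched once, then kept
        return cache[package], stats

    return request

proxy = make_proxy(lambda pkg: f"contents of {pkg}")
for pkg in ["openssh", "python3", "openssh", "openssh"]:
    _, stats = proxy(pkg)
print(stats)  # {'hits': 2, 'misses': 2}
```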

Some packages, like Crowdsec, are only installed on a single machine on my network, so the cache won't provide a benefit there. However, since most of my systems are running Debian, even though they may be running different services, they will still all request a lot of the same packages every time they update, like openssh or Python. These will only have to be downloaded the very first time they're requested, and all of the subsequent requests can be filled from the proxy's local cache.

Do you use a cache in your homelab? Let me know below!


Pixel 8 Pro Modem Issues

Not much of a post here, but I figured I'd share an issue I've encountered with my Google Pixel 8 Pro.

A few weeks ago, my Pixel phone started reporting "no connection" and prompting me to "insert SIM card." I first tried removing the SIM card and cleaning it with some isopropyl alcohol, then tried some common "fixes", such as resetting the network settings and toggling airplane mode on and off. I did find that rebooting the phone entirely would remedy the problem temporarily, but it would return hours or even mere minutes later.

I then began trying more drastic measures... I first requested an eSIM from my carrier, thinking that the SIM reader might have been physically damaged in some way and that an eSIM could be the solution. Bizarrely, my carrier insisted that only iPhones support eSIMs and refused to provide me with an eSIM code, even after I explained that Pixel phones do, in fact, support eSIMs (and supported them even before Apple hardware did). I was considering changing carriers anyway, so I figured this was a good time to finally switch. I signed up with a new carrier and received my new eSIM code right away.

I was expecting that I had finally found the solution to the problem, and was quite disappointed when, a few hours after switching, I was greeted by the same "insert SIM" messages and lack of service.

Now that it was clearly not a SIM issue, I started digging a little deeper. After enabling ADB, I was able to pull system logs from the phone via logcat. There appeared to be a number of errors surrounding the cellular modem, which I suspect were responsible for the sudden connectivity issues.

Here are the logs:

11-19 06:52:23.274 24777 24784 I modem_svc: State Monitor: Modem state changed from BOOTING to OFFLINE
11-19 06:52:23.287   949  1017 I modem_ml_svc: Poll timeout. interval=3000
11-19 06:52:23.288   949  1017 I modem_ml_svc: Modem state changed from 3 to 0
11-19 06:52:26.278 24777 24784 I modem_svc: Poll timeout. interval=3000
11-19 06:52:26.278 24777 24784 I modem_svc: State Monitor: Modem state changed from OFFLINE to BOOTING
11-19 06:52:26.290   949  1017 I modem_ml_svc: Poll timeout. interval=3000
11-19 06:52:26.290   949  1017 I modem_ml_svc: Modem state changed from 0 to 3
11-19 06:52:29.282 24777 24784 I modem_svc: Poll timeout. interval=500
11-19 06:52:29.294   949  1017 I modem_ml_svc: Poll timeout. interval=3000
11-19 06:52:32.288 24777 24784 I modem_svc: Poll timeout. interval=500
11-19 06:52:32.298   949  1017 I modem_ml_svc: Poll timeout. interval=3000
11-19 06:52:34.816  1089  1089 I cbd     : Picking unzipped modem image from /mnt/vendor/modem_img/images/default//modem.bin
11-19 06:52:34.816  1089  1089 I cbd     : Binary type specified path /mnt/vendor/modem_img/images/default//modem.bin is selected
11-19 06:52:34.811     1     1 W /system/bin/init: type=1107 audit(0.0:10975): uid=0 auid=4294967295 ses=4294967295 subj=u:r:init:s0 msg='avc:  denied  { set } for property=telephony.ril.modem_bin_status pid=425 uid=0 gid=0 scontext=u:r:vendor_init:s0 tcontext=u:object_r:default_prop:s0 tclass=property_service permissive=0' bug=b/315104235
11-19 06:52:34.819  1089  1089 I cbd     : BIN(/mnt/vendor/modem_img/images/default//modem.bin) opened (fd 0)
11-19 06:52:34.819   425   425 W libc    : Unable to set property "telephony.ril.modem_bin_status" to "4": PROP_ERROR_PERMISSION_DENIED (0x18)
11-19 06:52:34.820  1089  1089 I cbd     : CP binary file = /mnt/vendor/modem_img/images/default//modem.bin
11-19 06:52:34.820  1089  1089 I cbd     : CP REPLAY file = /mnt/vendor/modem_userdata/replay_region.bin
11-19 06:52:34.821  1089  1089 I cbd     : BIN(/mnt/vendor/modem_img/images/default//modem.bin) opened (fd 4)
11-19 06:52:34.823     1     1 W /system/bin/init: type=1107 audit(0.0:10976): uid=0 auid=4294967295 ses=4294967295 subj=u:r:init:s0 msg='avc:  denied  { set } for property=telephony.ril.modem_bin_status pid=425 uid=0 gid=0 scontext=u:r:vendor_init:s0 tcontext=u:object_r:default_prop:s0 tclass=property_service permissive=0' bug=b/315104235
11-19 06:52:34.827   425   425 W libc    : Unable to set property "telephony.ril.modem_bin_status" to "4": PROP_ERROR_PERMISSION_DENIED (0x18)

What you see above just kept repeating.

I don't know very much about Android specifically, but I don't think I should be seeing system processes returning permission errors. I recall from back in the Nexus days that the radio images usually had their own separate partition, so I briefly looked into whether I could reflash the radio partition, but I didn't see any factory images available. Perhaps that's not how the devices function anymore.

In any case, I ultimately decided that it might just be easier to contact Pixel Support, since my phone is still under warranty.
I chatted with a support agent for a couple minutes and explained my problem and the troubleshooting steps I had tried. Not long after, I had a replacement unit ordered. I received confirmation from FedEx the next day that an overnight delivery was inbound.

This isn't the first time that I've had unsolvable issues with a Google phone. In fact, my Nexus 6P, Pixel 7 Pro, and now my Pixel 8 Pro have all been replaced at least once. The only Google phone that never had an issue was my Pixel 5, which, ironically, I believe was also the cheapest. That said, Pixel Support has generally been helpful and willing to send a replacement device without much fuss. Their Advanced Exchange program even allows several weeks to transfer things to the replacement device before the old device needs to be sent back.

I really wish that Essential had made a sequel to the PH1, as that was an excellent phone with a very clean version of Android. In some ways, the PinePhone feels like a bit of a spiritual successor, though, particularly with its expandability potential.   Maybe it'll finally be a viable alternative to the Pixel line for someone who refuses to touch another Samsung phone.     


17 Nov 2024

Cities Skylines 2 - Skyve Install in Linux

Ah, Cities Skylines 2:

I recently got back into Cities Skylines 2 after leaving the game for a while due to the release of PDX Mods essentially breaking all of the mods I had been using via r2modman at the time.

Now that some time has passed, I decided to give it another go. I was quite interested to see that the Skyve mod manager has now come to C:S2, meaning that I didn't have to interact with PDX Mods directly and could use a proper mod manager instead. Installing Skyve is supposed to be a two-step process: first you install the Skyve "mod", which is essentially just an installer, then you use that installer to install the actual Skyve program. Once installed, Skyve is a free-standing application that interacts with the C:S2 data without requiring Steam or C:S2 to be running at the time. Not only can Skyve install and uninstall mods without needing to launch C:S2, but it also alerts you when other users have flagged a mod as broken or incompatible with another mod you have installed.

Unfortunately, while C:S2 runs beautifully on Linux without any additional configuration (likely better than it does on Windows), Skyve was a different story. Skyve requires Microsoft's .NET Framework and doesn't appear to be compatible with the open-source alternative, Mono, which is what would commonly be used on Linux. It took a bit of trial and error to get Skyve running, so I thought I would share the process that ultimately worked for me:

The first step is simply installing the Skyve "mod" via PDX Mods. After installing the Skyve mod and restarting C:S2 twice, the "Install Skyve" button appeared in the menu as it was supposed to, and clicking on it did bring up an installer interface. The installer appeared to run, but ended with an error message. I switched from Proton Experimental to Proton GE 9.20, using ProtonUp to download and activate the new Proton version. ProtonUp isn't needed, but it does make the process very simple.

After switching, the installer ran without any errors, but Skyve itself would not start.

Next, I used ProtonTricks to install the .Net 4.8 framework:

Open ProtonTricks and select the game:

Then select the "default" Wine prefix:

Then install component:

Then select .Net 4.8 (other 4.x versions might work):

That installed .Net, but when I tried to launch Skyve, I received an error about the .Net "RootInstall" registry not being found, so my next step was to install that:

In ProtonTricks, select "Run regedit"

Once in regedit, I navigated to

HKEY_LOCAL_MACHINE\Software\Microsoft\.NETFramework

There, I created the missing registry key, pointing to the .Net framework path:
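For anyone who hits the same error: the value .NET Framework applications conventionally read from that key is named InstallRoot, pointing at the Framework directory. Assuming that's the entry the error message referred to, it can also be created by importing a .reg file like this (the path shown is the typical location inside a Wine prefix's virtual C: drive; adjust it to match your own install):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\.NETFramework]
"InstallRoot"="C:\\windows\\Microsoft.NET\\Framework\\"
```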

Finally, my last step was to run Skyve.
I ran the binary via ProtonTricks' application launcher, which ensures that the program runs in the correct prefix. Skyve started and immediately recognized the C:S2 install, correctly listed all of my current mods, and even suggested a few I should remove. After confirming that I wanted to remove them, I booted up C:S2 and found that the mods had been successfully removed. I tried installing a new mod as well, and that worked exactly as intended.

Hopefully this might be helpful to someone else who finds themselves struggling to get Skyve running.

Thanks for reading!


1 Nov 2024

New Router: BananaPi R3 - Part 3 - Configuration

After being subjected to numerous mean glares from the wife and accusations of "breaking the internet", I think I've got it all configured now...

https://www.k3can.us/garfield.gif

Ironically, the more "exotic" configuration, like the multiple VPN and VLAN interfaces, was pretty simple to set up and worked without much fuss. The part that had me pulling my hair out and banging my head against the desk was just trying to get the wifi working on 2.4GHz and 5GHz at the same time... something wireless routers have been doing flawlessly for over a decade. After enough troubleshooting, googling, and experimenting, though, I had a working router.

I installed AdGuardHome for DNS and ad blocking, but kept dnsmasq for DHCP and local rDNS/PTR requests. dnsmasq's DHCP options direct all clients to AGH, and AGH forwards any PTR requests it receives to dnsmasq.
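On OpenWrt, the dnsmasq half of that hand-off is just DHCP option 6. A sketch of the relevant /etc/config/dhcp section (purely illustrative; it assumes AGH is answering DNS on the router at 192.168.1.1, and the exact addresses and ports will depend on your setup):

```
config dhcp 'lan'
        option interface 'lan'
        # DHCP option 6 tells clients which DNS server to use;
        # here, the AdGuardHome instance (example address).
        list dhcp_option '6,192.168.1.1'
```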

Next, I installed a Crowdsec bouncer and linked it to my local Crowdsec engine. Now, when a scenario is triggered, the offending address is blocked at the edge router rather than at the server.

Lastly, I installed and configured SQM (Smart Queue Management), which controls the flow of traffic through the router to the WAN interface. Without this management, the WAN interface buffer can get "bogged down" under heavy traffic loads, causing other devices to experience high latency or even lose their connection entirely. SQM performs automatic network scheduling, active queue management, traffic shaping, rate limiting, and QoS prioritization.
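On OpenWrt, SQM is provided by the sqm-scripts package and configured in /etc/config/sqm. A minimal sketch (the bandwidth figures are placeholders; the usual advice is to set them to roughly 90-95% of your measured WAN speeds):

```
config queue 'wan'
        option enabled '1'
        option interface 'wan'        # WAN-facing interface name
        option download '95000'       # ingress rate, kbit/s (placeholder)
        option upload '18000'         # egress rate, kbit/s (placeholder)
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```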

For a comparison, I used Waveform's bufferbloat test to measure latency under load.

Before SQM:

====== RESULTS SUMMARY ======
Bufferbloat Grade                                       C
Mean Unloaded Latency (ms)                          51.34
Increase In Mean Latency During Download Test (ms)  76.01
Increase In Mean Latency During Upload Test (ms)     8.69

After SQM:

====== RESULTS SUMMARY ======
Bufferbloat Grade                                       A
Mean Unloaded Latency (ms)                          38.92
Increase In Mean Latency During Download Test (ms)  12.75
Increase In Mean Latency During Upload Test (ms)     1.5

I have to say, I'm pretty happy with the results!
Going from a grade C with a 76 ms increase to a grade A with only a 12.75 ms increase is a pretty substantial difference. This does increase the load on the CPU, but with the BPI-R3's quad-core processor, I expect that I'll still have plenty of overhead.
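The improvement can be put in rough percentage terms with a couple lines of Python:

```python
# Download-test latency increase from the Waveform results above.
before_ms = 76.01   # before SQM
after_ms = 12.75    # after SQM

reduction = (before_ms - after_ms) / before_ms
print(f"Latency increase cut by about {reduction:.0%}")  # about 83%
```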

Overall, I think I'm happy with the configuration and the BPI R3 itself.


27 Oct 2024

Troubleshooting a minor network issue.

So, while surfing the web and reading up on how I wanted to configure my new BananaPi R3, I encountered a sudden issue with my network connection:

It seemed like I could reach the internet, but I suddenly lost access to my home network services. I tried to SSH into my webserver and was met with a "Permission denied" error. Since I had earlier attached an additional USB NIC to connect to the BPI, I thought that perhaps the laptop had gotten the interfaces confused and was no longer tagging packets with the correct VLAN ID for my home network. Most of my servers are configured to refuse any requests that don't come from the management VLAN, so this explanation made sense. After poking around in the network settings, all of the VLAN settings appeared correct, but I did notice that the link had negotiated at 100 Mbps instead of the usual 1000. I tried to reconfigure my network settings, manually setting the link to 1000 Mbps, resetting interfaces, changing network priorities, etc. I then tried the classic "reboot and pray" technique, only to find that my wired network connection was down entirely. I wasn't receiving an IP from the DHCP server, and the laptop kept reporting that the interface was UP, then DOWN, then UP again, then DOWN again.

Now I started to think that perhaps the issue was hardware related. My usual NIC is built into the laptop's dock, so I thought I would try power cycling the dock itself. This didn't seem to have any effect, besides screwing up my monitors' placement and orientation. My next thought was that there might be an issue with the patch cable. "Fast Ethernet" (100 Mbps) only uses two of the four wire pairs, so it can theoretically keep functioning on a cable too damaged for gigabit, which might explain the lower link speed; and if the damage were causing an intermittent fault, that could also explain the up/down/up/down behavior.

Being the smart homelabber that I am, I disconnected both ends of the cable and connected my ethernet cable tester. All 8 wires showed proper continuity, though, suggesting that the cable was fine. When I plugged the cable back in, however, I noticed that I was suddenly getting the normal 1 GbE again, but the link was still going up and down. This led me to the conclusion that the cable likely was the issue after all, despite it passing the cable test. I replaced the cable entirely with a different one, and found that I now had a stable 1 GbE connection, an IP address, and access to my network like usual.

Looking back, I think replacing the cable should have been troubleshooting step 1 or 2.

Also, in retrospect, there were some clues that might have let me fix the issue sooner, if I had only put the pieces together. I had noticed one day that the link speed had dropped to 100 Mbps, but it seemed to correct itself, so I ignored it instead of investigating. While working, I found that Zoom and other applications had started to report my internet connection as "unstable", and a speedtest showed my internet bandwidth to be drastically lower than it used to be. I assumed this was due to my ISP just being unreliable, since reduced speeds and entire outages are not unusual where I live.

In hindsight, I think these were all indications that there was a layer 1 (physical) issue in my network. In the future, I'll have to remember not to over-think things, and maybe just try the simplest solutions first.


New Router: BananaPi R3 - Part 2 - Flashing

Part 1 is here.

Now that the router is assembled, the next step is to decide where to flash the firmware. As I mentioned in the last post, this device offers a handful of options: firmware can be flashed to the NOR, NAND, or eMMC, or simply run from the SD card. From what I've read, it's not possible to boot from an M.2 card, though; that slot is only for mass storage.

After a bit of reading, my decision was ultimately to install to all four!  Sort of...

Image showing the leads connected the UART connector and the DIP switches
The DIP switches and the leads connecting to the UART

My plan is to install a "clean" OpenWRT image to both the NOR and NAND. The NAND copy will then be fully configured into a working image and copied to the eMMC. The eMMC will then be the primary boot device going forward. If there's a problem with the primary image in the future, I'll have a cascading series of recovery images available. At the flip of a switch, I can revert to the known working image on the NAND, and if that fails, I can fall back to the perfectly clean image on the NOR.

..And I do mean "at the flip of a switch". Due to the way the bootable storage options are connected, only 2 of the 4 can be accessed at a time. Switching between NOR/NAND and SD/eMMC requires powering off the BPI and toggling a series of 4 DIP switches, as seen on the official graphic below:

https://wiki.banana-pi.org/images/thumb/4/4c/BPI-R3-Jumper.png/320x107x320px-BPI-R3-Jumper.png.pagespeed.ic.Qyd9EK01n9.png

Switches A and B determine which device the BPI will attempt to boot from. Switch C sets whether the NOR or NAND will be connected, and switch D does the same for the SD and eMMC. To copy an image from the SD card to the NOR, for example, switch D must be high (1) to access the SD card, and switch C must be low (0) to access the NOR. Since switches A and B set the boot device independently of which devices are actually connected, it would seem that you could set them to an impossible state and render the device unbootable, like 1110 or 1001.

To accomplish my desired install state, I had to first write the OpenWRT image to the SD card on a PC and then insert it into the BPI. With the switches at 1101, I could write the image from the SD card to the NOR, then flip the switches (with the BPI powered off) to 1111 to copy the image to the NAND. Lastly, I can remove the SD card and reboot with the switches at 1010 to boot from the NAND. Then I'll finish configuring the BPI into my fully working state. This is the step I'm currently working on. I have it about 80% configured, but will need to actually install it in my network before I can complete and test the remaining 20%. Once it is installed, tested, and fully configured, I'll copy the NAND to the eMMC before finally setting the switches to 0110 and booting from the eMMC for ongoing use.

Unfortunately, I haven't had a good opportunity to take my network offline to install the new router, so the last bit of configuration might need to wait a little while...

 

Continue reading»

22 Oct 2024

New Router: BananaPi R3 - Part 1 - Hardware

I've been using a consumer router from 2016 (with OpenWRT hacked onto it) all the way into 2024, and felt it might finally be time for an upgrade. I settled on a BananaPi R3 because it was a reasonable price and seemed like it would be a fun project.

Here's the bare board as received:

You can see most of the physical features in this photo, including a USB 3 port, two SFP ports, five RJ45 ports, an M.2 slot for a cellular modem, and a 26-pin GPIO header. On the bottom, there's also an M.2 slot intended for NVMe storage, as well as slots for a micro SD card and a micro SIM. The CPU is a quad-core ARM chip paired with 2 GB of RAM, and there's a handful of flash chips providing NAND, NOR, and eMMC. Quite a lot of options!

My plan is to install OpenWRT to the NAND storage. I suspect the NVMe slot might be useful if I wanted to run a small file server or something, but that's not in the plan for now.

 

The first step I took in assembly was to apply some thermal pads to the chips and then attach a cooler and fan.

The thermal pads are "OwlTree" brand, but I don't have any particular preference for them; I just happened to have them on hand from a previous project. The CPU got a 0.5mm pad, and I applied 1.5mm pads to the remaining chips.

Thermal pad applied to CPU

After applying pads to all of the chips, I attached the cooler and plugged in the fan.

The next step was to install the board into the case. I went with the official BPI-R3 case. The quality is surprisingly nice, and it looks great once assembled. After mounting the board, I installed the pigtails for the eight (yes, eight) antennas and applied some basic cable management.

Board installed into case and coax attached and routed to antenna ports.  

Now, I can't finish putting the case together quite yet, since I'll need access to the UART pins to install OpenWRT to the NAND flash. The UART header can be seen on the right side of this photo, but there's no way to access it once the case is assembled.

But, that's enough for today. I'll post an update once I make some progress towards getting OpenWRT flashed.

Continue reading»

20 Oct 2024

Wavelog

I posted a while back about online amateur radio logging (here) and wanted to do a follow up on some of the client software I've been using to log contacts and sync to both LOTW and QRZ.

I used to use CQRLOG, which is still a great logger, but it's primarily intended to be installed directly on the PC you're logging from. If you want to view QSOs or add a new entry, you need access to that PC.
For greater flexibility, there are now self-hosted, server-based logging platforms, such as Wavelog and Cloudlog, which allow access from any device with a web browser and an internet connection. I'm going to refer to Wavelog throughout this post, but the features are similar between Wavelog and Cloudlog (the former being a fork of the latter).


Wavelog is a PHP application intended for a LAMP stack, although it actually works on a variety of systems: more on that at the end.

To start, here's a quick screenshot of the QSO entry, on both desktop and mobile:

At a glance, you can see all of the details of the station as soon as you enter the callsign. If the station has a QRZ profile, it will even display their profile photo!

Once you start building your logs, Wavelog can create a number of reports and analytics based on your logbooks, such as:

Or band usage:

It can also track progress towards awards, such as:

 

One of the features that I particularly enjoy is that you can upload QSL cards into your logbook, attached to the QSO entry: 

It can also automatically retrieve eQSL cards, as well!  As I mentioned before, Wavelog can sync entries to LOTW, QRZ, eQSL and more, and will show you the confirmation status for each.

If you happen to use GridTracker, it can log directly to Wavelog. There's also a small helper application that lets you log directly from WSJT-X and automatically syncs your QSO entry page with your radio via FLRig (frequency, band, etc.).

 

I ran a Cloudlog server for both myself and my father until a month or two ago, when I switched to Wavelog. The migration from one to the other was fairly straightforward, and the Wavelog wiki provides instructions. I'm running Wavelog on an Nginx webserver with a MariaDB backend, but there's also a Docker image if you want something simpler. I did test Cloudlog on a lighttpd server as well and encountered no problems.

So far, I'm happy to report that Wavelog has been quite reliable. The only downtime I've encountered is when I've accidentally broken the webserver itself, and that's certainly not the fault of Wavelog.

If you're looking to try a new logging solution, Wavelog would get a strong recommendation from me.

Check it out at Wavelog.org

 

Continue reading»

14 Oct 2022

Steamdeck + Ham radio = SteamedHamDeck?

Continue reading»

8 Jul 2019

Online Amateur Radio Logging

Logging amateur radio activity has been around about as long as amateur radio itself. At one time, all amateur stations were required to keep a log of their activity. It's no longer required, but it's still a very common practice.
Many people enjoy trying to contact a variety of different geographical areas, such as each of the United States, various different countries, or specific zones or grids. The easiest way to keep track of this is with a station log, particularly a modern online logbook.
There are a number of different online log books, but they generally have the same core features. They record your contacts, confirm your contacts, and track awards.

The first part is simple. You enter the station you contacted, the time of the contact, and the frequency or band. The system saves this info and you can review it from any web browser. Many logs will also let you record a little info about yourself, and show you the info added by other users.

The second part, confirmation, is done by cross-referencing your logbook with the other station's logbook. If I entered into my log that I contacted VE3XYY on 10 Jun at 17:45 on 40m, the online system will then check VE3XYY's log to see if he recorded the contact as well. If both stations recorded it, it's considered "confirmed," commonly called a "QSL." It's important to note that for this confirmation to happen, both stations need to be using the same online log.
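Conceptually, that cross-referencing is just a lookup in the other station's log. Here's a toy sketch in Python; the field names and the matching rules (same band, times within a 15-minute window) are my own simplification, not any particular site's actual logic:

```python
# Toy version of the QSO confirmation check an online logbook performs.
# Matching criteria here are a hypothetical simplification.
from datetime import datetime, timedelta

def qso_confirmed(my_entry: dict, their_log: list[dict],
                  window: timedelta = timedelta(minutes=15)) -> bool:
    """True if the other station's log contains a matching contact with me."""
    for entry in their_log:
        if (entry["call"] == my_entry["my_call"]          # they logged me
                and entry["band"] == my_entry["band"]     # same band
                and abs(entry["time"] - my_entry["time"]) <= window):
            return True
    return False

mine = {"my_call": "K3CAN", "call": "VE3XYY", "band": "40m",
        "time": datetime(2019, 6, 10, 17, 45)}
ve3xyy_log = [{"call": "K3CAN", "band": "40m",
               "time": datetime(2019, 6, 10, 17, 47)}]
print(qso_confirmed(mine, ve3xyy_log))  # True: both stations logged it
```

If VE3XYY never logged the contact (or logs it on a different service), the lookup finds nothing and the QSO stays unconfirmed.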

Lastly, many online log books also offer an award system. Common awards are "Worked all States," "Worked all Zones," and "DXCC." To earn an award, you must meet the criteria with confirmed contacts. To earn "Worked all States," for example, you must have a confirmed contact with an operator in each of the 50 United States. This is where choosing the right logbook becomes important; because only confirmed contacts count towards these awards, choosing a popular logbook increases the chances that the other station is using the same log, and thus allows for a confirmed contact.   
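The award logic above boils down to counting distinct confirmed entities. A minimal sketch for "Worked all States", with a made-up data layout:

```python
# Sketch of award tracking: only *confirmed* contacts count, and
# duplicates within a state don't help. The log format is hypothetical.
def worked_all_states(log: list[dict], total_states: int = 50) -> bool:
    """True once confirmed contacts cover every state."""
    confirmed_states = {q["state"] for q in log if q["confirmed"]}
    return len(confirmed_states) >= total_states

log = [{"state": "OH", "confirmed": True},
       {"state": "OH", "confirmed": True},   # duplicate state: no progress
       {"state": "PA", "confirmed": False}]  # unconfirmed: doesn't count
print(worked_all_states(log))  # False: only OH is confirmed
```

Using a set makes the "one confirmed contact per state" rule automatic, which mirrors why an unconfirmed contact contributes nothing toward the award.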

The three most common online logs seem to be ARRL's Log Book of the World, QRZ.com, and eQSL.

Personally, eQSL is my least favorite. I find its outdated interface confusing; while I enjoy the aesthetic (see my webpage), it doesn't make for an easy logging experience. eQSL does go a little beyond just recording your logs, though: the website also allows trading of "electronic QSL cards." This is fun, but it requires a subscription fee if you want to use a custom card or apply for awards.

The Logbook of the World (LOTW) is the ARRL's official online logbook. Rather than allowing you to enter contacts directly, it requires you to install a program on your computer and submit a copy of your FCC license. This is done to prevent fake accounts, but the hassle may also discourage some operators from using the service. I have heard that this system is particularly difficult for operators in other countries, so it may not be a good choice if you intend to make a lot of long distance contacts. LOTW also has an outdated-looking interface. Luckily, since you're required to use a program on your computer to submit contacts, you don't have to actually visit the webpage very often.

QRZ.com is my personal favorite. It has the added bonus of also accepting confirmations made by the Logbook of the World, so it greatly increases your chances of getting a confirmation on a contact. It also has the most modern-looking interface of the three, allowing you to add and review entries directly on the website, create and view profile pages, and even post messages in an online community. Here's my page. I also believe this is the only site of the three that doesn't charge a fee to apply for awards. It has a full API as well, so you can sync logs from a variety of desktop applications.

You don't have to choose only one, and there are many others out there as well, such as Clublog, HamQTH, HRDLog, and HamLog. Choosing a popular one is your best chance of receiving "confirmed" contacts, but if you're just logging for your own records, simply choose the one you find most comfortable.

73!

Continue reading»

2 Jul 2019

Field Day Yardsale Find

On the first morning of ARRL Field Day, I got a quick and very blurry photo from a relative of what appeared to be an older oscilloscope or other test equipment. She had found it at a local yard sale and thought it might be something I was interested in.
She was right.

After quickly getting dressed, I headed out to see this item for myself. It turns out that it was, in fact, a 1970s radio communications receiver. This receiver was sold under the Sears brand, but was actually a Yaesu FRG-7. It can receive AM, CW, and SSB from 500kHz to 30MHz. Unlike modern ham or SWL receivers, there's no digital display; tuning in a signal actually takes a combination of four different dials. Regardless, it cost me $3, it works, and it has a certain charm to it.

Continue reading»

Yaesu to PC Headset Adaptor

I've been using a "Gaming" PC headset as a pair of radio headphones since I purchased the headset some time ago, but it's always bothered me that I couldn't use the headset's microphone with my FT-897...
Then I found N1GY's simple schematics for a PC Headset to Yaesu Adaptor! After purchasing a new soldering iron (actually, two; don't shop on Amazon while sleep-deprived), I snagged the few components I needed from some scrap circuit boards and set to work.
Here's my end result. I used a small enclosure purchased from Radio Shack (before they went out of business), and added a simple red button as a PTT.

It seems to work quite well, although I needed to reset all of my gain settings. I've also found that a hands-free headset, combined with VOX transmitting, makes for a lovely and simple ham radio experience. My hands are free to type, or I can just lay back and make contacts. Speaking of typing, I plan to cover my logging software and logbooks in a future post.

Continue reading»