Tag - gopher


17 Feb 2025

Ramblings on my digital life

Are you in control of your digital life, or is it controlling you?
Who decides what you see online?
What information is being collected about you?

Like many, I'm going through a phase of re-evaluating my interaction with social media and, to some extent, computer technology as a whole. I'm not entirely "degoogling," like some are trying to do; rather, I'm trying to tailor my digital life to me instead of the other way around. I don't want to be fed a "curated" stream of content that some algorithm has decided I'm most likely to "engage" with. I want content that I want to see, in a format that I've chosen to see it in.

I want control.



Blogs

One step towards achieving this goal is what you're reading now. This blog. I decided on the software. I built the server that it's running on. I picked the ugly colors (unless you're reading this via gopher). I can delete posts, create new posts, or destroy the blog entirely. There are no trackers, ads, AI, or other garbage unless I decide that there should be. This doesn't get the readership that I would get if I were writing on Facebook, but the few people who do read it are reading it because they want to, and that's what's important.

I've also started following other blogs, some of which are listed in the "Links" section in the menu to the left. Reading personal blogs is great because you're just reading what a person wrote. There's no algorithm; it's just delivering whatever they typed straight into your eyeballs.

Websites

I maintain my personal homepage and I've been visiting other personal websites. While I personally self-host my homepage, sites like Neocities are making it easy for individuals to create personal webpages without needing to manage a webserver themselves. Finding new personal homepages can be a little difficult compared to finding massive SEO'd sites, but services like "webrings" can help you discover similar homepages and provide plenty of rabbit holes to travel down. My homepage is part of the "No AI" webring, whose membership consists of personal pages free of AI-generated content.

Email

I still use Gmail, it's true, but I now interact with it through Thunderbird. I know that doesn't remove all of the Google tracking and whatnot, but it does let me access my email in a presentation that I prefer, and it allows me to use PGP/GPG, which does prevent Google from reading my emails, if it's actually used. Unfortunately I don't receive many PGP-encrypted emails, but I've published my public key and my domain supports WKD, so it's easy to access if anyone wants to use it. My address is just "me" @ this domain, by the way, if anyone wants to send something my way.
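For the curious, WKD ("Web Key Directory") just means the public key is published at a predictable HTTPS path derived from the email address, so clients can fetch it automatically. The general shape of the lookup URL (the "direct" method; the hash is an encoding of the hashed local part, shown here only as a placeholder):

```
https://<domain>/.well-known/openpgpkey/hu/<hashed-local-part>?l=<local-part>
```

Modern GnuPG-based clients, including Thunderbird, will perform this lookup for you when you ask them to discover a key for an address.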


RSS Feeds

Another advantage of Thunderbird is the built-in RSS reader. Instead of visiting, say, reddit.com, I can import my favorite subreddits directly into Thunderbird as RSS feeds, and follow other blogs, podcasts, forums, and so on the same way. No ads, no trackers, just the content I actually want to see.


Fediverse

Of course the Fediverse has to play a role. The "fediverse" is a network of social media applications that can all communicate with each other. It's sort of like if Facebook, Reddit, Snapchat, YouTube, ~~Twitter~~ "X", and so on were all interconnected… except that each service is actually composed of thousands of individually-owned servers instead of a massive server farm owned by a single entity. If I decide to take down my server, my posts will eventually disappear, but the fediverse will live on. No single entity controls the fediverse. Owning the server also means that I control the server. I can decide not to federate with (interact with) other servers, or limit which servers I want to federate with. I'm in control of my own social media.

SearXNG

"SearXNG is a metasearch engine, aggregating the results of other search engines while not storing information about its users." Essentially, instead of directly using Google or Bing, I search via my own SearXNG server, which will create a new "identity" for every search, aggregate the results from dozens of other search engines, and then strip out all of the ads and sponsored results. That means I can search for things like "political news" and get results from all across the internet without getting blasted with advertisements for MAGA baseball caps for the next month.

Last thoughts

Breaking away from major social media and massive corporate entities isn't easy (or even totally feasible), but I do feel a lot better knowing that I have a little more control over this aspect of my digital life.

14 Feb 2025

Lynx - A Text Based Web (and Gopher) Browser

What is it?

The Lynx browser (not to be confused with the later Links browser) is the oldest actively maintained web browser, initially developed in 1992 and still being maintained today. What makes lynx look a bit different from other modern browsers is that even in 2025, it can only display basic text. The lynx browser ignores all of the ads, pictures, JavaScript, fancy formatting, and annoying infinitely-scrolling slop to deliver just the content you want to read. Not only does this reduce distractions, it's also great to use if your internet connection has limited bandwidth.

Using lynx

Lynx is included in the repos of most Linux distributions, so installing it is just a matter of running, e.g., apt install lynx via your favorite package manager. Lynx runs in a terminal emulator and can be started with the command lynx, or you can directly open a specific website with lynx {website you want}.
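Once inside, navigation is keyboard-driven. A few of the stock keybindings, from memory of lynx's defaults (press h inside lynx for the authoritative list):

```
↑/↓   move between links
→     follow the highlighted link
←     go back
g     go to a URL
/     search within the page
q     quit
```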

Here's an example of Google.com:

google.com

And here's Wikipedia:

Wikipedia

The formatting has obviously changed quite a bit, but all of the content is still there. This makes lynx great for sites where you just want to be able to read the content quickly, without distractions. Lynx also supports the gopher protocol, and since gopher is a text-based service to begin with, browsing the gopherspace from lynx feels completely natural.

Super Dimensional Fortress's User's Gophersites:

Super Dimensional Fortress

Pitfalls

There are, however, a few downsides to accessing the modern Web through a text-only browser. The things that many of us would like to avoid (ads, JavaScript, endlessly scrolling slop, etc.) have become so deeply entrenched in some sites that they simply can't function without them. You might think that this could be a great way to revisit pages you used to enjoy, like Facebook, but here's how the modern Facebook (even m.facebook) looks in Lynx:

facebook

In fact, even some simple websites might not display well in lynx if they're built with certain older formatting tools, like frames. Here's my own website as an example. It's worth noting, though, that while my site doesn't look correct without frames, it can still be fully navigated using the FRAME: links at the top:

k3can's homepage

Thoughts

So, while lynx is unlikely to fully replace your graphical browser on the modern web, it's still surprisingly useful for focused reading, navigating gopherspace, and for situations with limited bandwidth. Installing lynx is simple and the entire browser is only 6 MiB in size, so it's a great tool to have on your system.

26 Jan 2025

Blog Mirroring to Gopher

This blog is also available via gopher! And here's how:

Gophernicus

As I mentioned in my previous post, my gopher server of choice is Gophernicus. One of the benefits of Gophernicus is that it will automatically generate gophermaps based on the files it finds in a directory. This means that adding entries is as simple as dropping a file into the directory; the next time the server is accessed, the new file will appear automatically.

Mirroring

All that remains is finding a way to easily add files to the gopher directory. Since I already have this blog, I decided the first thing I wanted to do was to mirror these posts into the gopherspace. This blog uses Dotclear, which provides a plugin to access an RSS feed for the blog. By fetching (and then parsing) this feed, I can export the contents into text files accessible to gopher. I wrote a Perl script to accomplish this and created systemd units to execute that script on a recurring schedule to pull in new entries. The full source code is available on my GitHub. The script uses LWP::Protocol::https to fetch the RSS feed and XML::Feed to extract the title, date, and body of each entry. The date and title are used as the file name, and the body is reduced down to plain text and then written to the file.
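To make the pipeline concrete, here's a rough shell sketch of the same idea. To be clear, the actual script is Perl (LWP::Protocol::https + XML::Feed); everything below, including the sample feed and the date-plus-title file naming, is only an illustration.

```shell
#!/bin/sh
# Illustrative sketch only: parse one entry from a sample RSS feed and
# write it out as a plain-text file named from its date and title.

OUT=$(mktemp -d)   # stand-in for the gopher directory (e.g. /srv/gopher)

cat > "$OUT/feed.xml" <<'EOF'
<rss version="2.0"><channel>
<item>
<title>Lets Gopher</title>
<pubDate>2025-01-18</pubDate>
<description>Setting up a gopher server.</description>
</item>
</channel></rss>
EOF

# Crude tag extraction; a real parser (XML::Feed) handles this properly.
title=$(grep -o '<title>[^<]*' "$OUT/feed.xml" | sed 's/<title>//')
date=$(grep -o '<pubDate>[^<]*' "$OUT/feed.xml" | sed 's/<pubDate>//')
body=$(grep -o '<description>[^<]*' "$OUT/feed.xml" | sed 's/<description>//')

# Date and title become the file name; the body is written as plain text.
printf '%s\n' "$body" > "$OUT/$date-$title.txt"
ls "$OUT"
```

Dropping the resulting files into the gopher root is all Gophernicus needs to pick them up on the next request.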

If you'd like to use the script, it should, probably, maybe work with other RSS and Atom feeds, but some feeds can be a bit loosey-goosey about how they handle their XML, so no guarantees.

18 Jan 2025

Let's Gopher!

I've decided it's time to get my gopher server running again, and wanted to outline the basic steps for anyone else looking to dive into the gopherhole.

install

I personally like gophernicus, a "modern full-featured (and hopefully) secure gopher daemon." Assuming that you're running a Debian server, we can install a simple gopher server via apt:

apt install gophernicus

installing screenshot

This should also install a socket unit and create a service template for gophernicus.

config

Gophernicus defaults to serving from either /var/gopher/ or /srv/gopher. On my install, the default config file is stored at /etc/default/gophernicus. I feel like /etc/default/ isn't used very often these days, but the intention is to provide a simple way for a developer to ship a default configuration for their application. We can create a new config somewhere reasonable, like /etc/gophernicus, or just modify the existing file in /etc/default/ like a lunatic. I'm choosing the latter, of course.

The important thing to add is a hostname, but we can also turn off any features we don't want.

My config looks like OPTIONS=-r /srv/gopher -h gopher.k3can.us -nu.

The -nu option just disables the "personal" gopherspace (serving of ~user directories). Unless you know what this does and intend to use it, I'd suggest disabling it.

testing

We should now be able to start the service and access our empty gopherhole via systemctl start gophernicus.socket, which will create an instance of the gophernicus@.service unit on demand. We can run a quick test by creating a file to serve and viewing the gophermap. To create the test file: touch /srv/gopher/test.txt, and then we can fetch the gophermap via telnet [ip address] 70.

 Trying 192.168.1.5...
Connected to 192.168.1.5.
Escape character is '^]'.

i[/]    TITLE   null.host   1
i       null.host   1
0test.txt                              2025-Jan-18 09:47     0.1 KB /test.txt   gopher.k3can.us 70
.
Connection closed by foreign host.

That little jumble of text is our gophermap. Lines starting with i indicate "information", while the 0 indicates a link to a text file. Gophernicus creates this map automagically by examining the contents of the directory (although it also provides the option of creating a map by hand). To add files and folders, we can simply copy them into /srv/gopher and gophernicus will update the map to include them.
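To see the format concretely, here's a menu line built by hand (the hostname is a placeholder): the first character is the item type, then the display string, followed by the selector, host, and port, all separated by literal tabs.

```shell
# One gophermap menu line: type '0' (text file) plus a display name,
# then selector, host, and port, separated by literal tab characters.
line=$(printf '0About this server\t/about.txt\tgopher.example.com\t70')
printf '%s\n' "$line"

# Sanity check: split on tabs; a well-formed line has four fields.
printf '%s\n' "$line" | awk -F '\t' '{ print NF }'
```

Hand-written maps use exactly these lines; Gophernicus just generates them for you from the directory listing.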

From here, if we want to expose this publicly, we can simply route/port-forward through our router.

In my case, though, I'm going to need to configure a few more components before I open it up to the public... First, I use [apparmor](https://apparmor.net/) to limit application access, and second, my webserver lives behind a reverse proxy.

apparmor

For apparmor, I created a profile for gophernicus:

include <tunables/global>
# AppArmor policy for gophernicus
# by k3can

/usr/sbin/gophernicus {
  include <abstractions/base>
  include <abstractions/hosts_access>
  network inet stream,

  /etc/ld.so.cache r,
  /srv/gopher/ r,
  /srv/gopher/** r,
  /usr/bin/dash mrix,

}

This profile limits which resources gophernicus has access to. While gophernicus should be fairly secure as-is, this will prevent it from accessing anything it shouldn't on the off chance that it somehow becomes compromised. Apparmor is linked above if you want to get into the details, but I'm essentially telling it that gophernicus is allowed to read a few commonly-needed files, run dash, and use its TCP stream. Gophernicus will then be denied access to anything not explicitly allowed above.

nginx

Lastly, to forward gopher through my reverse proxy, I added this to my nginx configuration:

#Gopher Stream Proxy

stream {
    upstream gopher {
        server 192.168.1.5:70;
    }

    server {
        listen 70;
        proxy_pass gopher;
    }
}

Since nginx is primarily designed to proxy HTTP traffic, we have to use the stream module to forward the raw TCP stream to the upstream (host) server. It's worth noting that as a TCP stream, nginx isn't performing any virtual-host matching; it's simply passing anything that comes in on port 70 to the upstream server. This means that while I've defined a specific subdomain for the host, any subdomain will actually work as long as it arrives on port 70; gopher://blog.k3can.us, gopher://www.k3can.us, and even gopher://sdkfjhskjghsrkuhfsef.k3can.us should all drop you into the same gopherhole.

access

While telnet will show you the raw gophermap, the intended way to traverse gopher is through a proper client application. For Linux, Lynx is a command-line web browser and gopher client available in most repos, and for Android, DiggieDog is available through the Play Store.

next steps

Now all that's left for me to do is add content. I used to run a script that would download my Mastodon posts and save them to a "phlog" (a gopher blog), and I could potentially mirror this blog there as well. That helped keep the content fresh without needing to manually add files. I haven't quite decided if I want the gopherhole to primarily be a mirror of my other content, or if I want to be more intentional with what I put there.

Besides figuring out content, I'm also curious about parsing my gophernicus logs with CrowdSec. Unsurprisingly, there's no parser currently available on the CrowdSec Hub, so this might take a little tinkering...