1 Feb 2025

A rambling on the topic of generative AI.

While trying to write the script mentioned in a prior post, I decided that I would use an XML parsing module of some sort rather than writing my own. A quick google search turned up a number of options, but I wasn't quite sure which would be best for my use case. I knew that I specifically wanted to read RSS feeds, so a general-purpose XML parser was probably overkill if there was a more RSS-focused option available. I found several RSS-focused modules to choose from, including XML::RSSLite, which sounded promising as a lightweight, RSS-specific module.

The documentation was rather sparse, though, and I thought I might get a better idea of how to use it if I could see some examples. When a google search didn't turn up much in the way of results, I decided I would just ask an AI to create an example for me.

In its example, the AI made this call:

my $rss = XML::RSSlite->new();

...which might look reasonable on the surface, but it didn't actually work. Now, instead of understanding how the module works by reviewing a working example, I got to learn by troubleshooting something else's broken code. After actually checking the documentation, it turns out that XML::RSSLite isn't object-oriented at all and never defines a "new" method. After I pointed this out, the AI gave it another go:

my $rss = XML::RSSlite->parse_url($feed_url);

Closer, I guess? It still doesn't work, though, because it's still calling methods and functions that don't exist. I pointed that out again and let it try one more time:

my $rss = XML::RSSlite->new();

Really?

Despite my attempts to prompt it toward a correct answer, it kept admitting that each answer was wrong and then producing the same incorrect code again and again.

For reference, here's how the module is intended to be used:

use XML::RSSLite;
parseRSS(\%result, \$content);

Yep. That's it. Super simple. One function that takes two arguments: a reference to an output hash for the results and a reference to an input scalar holding the raw feed.
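
For context, a complete feed-reading script is only a few lines longer. Here's a minimal sketch; I'm assuming LWP::Simple for the fetch, the URL is just a placeholder, and the 'item' key for parsed entries comes from the docs' synopsis:

use strict;
use warnings;
use LWP::Simple qw(get);
use XML::RSSLite;

# Placeholder feed URL; point this at whatever you actually read.
my $content = get('https://example.com/feed.rss')
    or die "Couldn't fetch the feed\n";

my %result;
parseRSS(\%result, \$content);   # output hash first, input scalar second

# Parsed items land in an array of hashes under the 'item' key.
print "$_->{title}\n" for @{ $result{item} };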
 
It's a little baffling that the AI had this much trouble with such a simple request, but my guess is that it comes down to one fact: XML::RSSLite doesn't get a lot of use.
An AI "learns" by slurping up other examples and building a model based on what it sees. It doesn't actually "understand" any of it, it just knows that things generally happen a particular way.  Essentially, it just constructed something that looked like working code, without any knowledge of whether it would function or why. Even when it was told that it was wrong, it still produced the wrong code because it looked the most "right". It doesn't read man pages or reference documents to better understand how to answer a problem, it reads man pages to understand how a man page should look and reads answers to understand how an answer should look.
 
And I feel like this is really where the "AI is going to take over the world" thing gets a bit silly. The AIs of today can't create new things; they can only create variants of things they've already seen. They can write a story because they know what a story should look like.
They can create an image of a cat riding a Craftsman lawnmower, because they know what cats and Craftsman lawnmowers should look like.
 
I asked a generative AI to "create an image of something entirely new, that has no analogs in the current world."
It produced an admittedly very pretty picture of a cave with a river running through it.

I'm sure that exact cave doesn't exist in the known world, but it's far from an entirely new kind of thing.
On the other hand, human artists like H. R. Giger can create surreal nightmare bullshit that looks like nothing you've ever seen or ever want to see again.
 
I guess my conclusion is that if you're going to fear anything in the current world, it should be us humans.