
Designing an OpenAI powered IRC Chat Bot for Fun and Profit

As seen in 2600: The Hacker Quarterly, Autumn 2023!

Franklin

A Crash Course in LLM AI

So, for a long time people have thought about what happens when computers become sentient, and what even defines sentience and self-awareness. People have fantasized about this, writing books and making movies about AI takeovers since a time when computers were only in their infancy, which surprises even me. This will be a more specific intro to ChatGPT's type of AI. In layman's terms, the model is a huge set of numerical floating point weights that, to some extent, mimics neuroplasticity: patterns reinforced during training get used more often by the algorithm later. The initial training data is lots, and lots... AND LOTS, of human language, which the algorithm sees broken up into small pieces of words called tokens. Given the text a user enters, the algorithm is designed to statistically generate the most probable continuation, deciding probabilistically, bit by bit, what should come next based on which tokens are generally found together in the training data, and in doing so it mixes information from the model's training back in to "finish" what the user wrote. If you would like to read more about how ChatGPT specifically works, there is a decent article explaining it here.
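The "predict the next piece" intuition can be illustrated with a toy bigram model. This is nothing like GPT's actual transformer with learned weights; it is just a hypothetical counting sketch of the idea that the most statistically probable successor token gets emitted next:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy bigram "language model": count which word follows which in a
# tiny corpus, then complete a prompt by always picking the most
# frequent successor.  Real LLMs use learned floating point weights
# over subword tokens, but the next-piece prediction loop is the
# same shape.
my $corpus = "the cat sat on the mat the cat ate the fish";
my @words  = split ' ', $corpus;
my %next;    # successor frequency table
for my $i (0 .. $#words - 1) {
  $next{ $words[$i] }{ $words[ $i + 1 ] }++;
}

sub complete {
  my ($word, $n) = @_;
  my @out = ($word);
  for (1 .. $n) {
    my $succ = $next{$word} or last;    # no known successor: stop
    # pick the most probable successor (ties broken alphabetically)
    ($word) = sort { $succ->{$b} <=> $succ->{$a} || $a cmp $b } keys %$succ;
    push @out, $word;
  }
  return join ' ', @out;
}

print complete("the", 3), "\n";    # prints: the cat ate the
```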

So, okay, the LLM is basically mapping a user's input to a probable output. Now in my opinion, this is hardly intelligence. But it provides the illusion of intelligence, and in my experience it is just good enough that, to an unwitting user, it might even pass a Turing test. Instead of learning, ChatGPT on many occasions completely makes up facts simply because they seem probable, not because they are actually true, though talking about this seems to be frowned upon by the designers of ChatGPT. But again, by my definition of intelligence this hardly pushes the envelope, and it even opens the creators to an ethical issue, considering they are pushing this as intelligence... when that is hardly the case at all. It is a talking probability engine. But for my purposes, it happens to work almost perfectly.

An IRC Bot

I decided one day I was determined to make an IRC bot superior to the Markov bots we usually see... something useful, and entertaining enough for people to play with. Enter Franklin. There have actually been two major versions of what is known as Franklin: the initial one, written in Bash shell, had many security implications and was pretty quickly scrapped and rewritten from the ground up as a plugin for the IRC client Irssi, written in Perl.

Perl was one of the first languages I learned out of the gate, right after QBASIC and around the same time I was learning C, so I've been around the block a couple of times with it and felt confident I could get this done. I first set out to choose a model and researched my options. OpenAI had been making headlines recently, so I headed there and came across the showcase ChatGPT, which wasn't exactly what I had in mind; they didn't yet offer a public API hook for that model iteration anyway, so I settled on text-davinci-003, which has worked well for my purposes after a little tuning. The main program waits for a message to be received in channel, then hands it off to a subroutine that picks apart the user's request and determines whether Franklin was called specifically, or whether a random Franklin message should be triggered instead. Once it has the user's message, it hands it to the subroutine that sets up what I refer to as the contextual prelude (including calling a second routine that resolves URLs and strips their HTML down to plain text), builds the request JSON, calls the OpenAI API, and returns the message text-davinci-003 generated back to the user via another Irssi hook. Most user-definable variables can be set via Irssi's /set command and are then pulled into Franklin from Irssi's memory.
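The glue to Irssi looks roughly like this. This is a simplified sketch, not Franklin's actual source: the setting names here are illustrative, and the real script registers many more settings and signals:

```perl
use strict;
use warnings;
use Irssi;

# Sketch of how an Irssi plugin like Franklin wires itself up:
# register tunables so they are adjustable with /set, then bind a
# handler to public channel messages.  Runs inside Irssi only.
Irssi::settings_add_str('franklin', 'franklin_api_key',     '');
Irssi::settings_add_int('franklin', 'franklin_history_len', 8);

sub on_public {
  my ($server, $msg, $nick, $address, $target) = @_;
  my $histlen = Irssi::settings_get_int('franklin_history_len');
  # ... hand $msg off to the routine that builds the contextual
  # prelude and calls the OpenAI API ...
}

Irssi::signal_add('message public', \&on_public);
```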

The main called routine looks like:

sub frank {
  my ($server, $msg, $nick, $address, $channel) = @_;
  $msg_count++;
  my @badnicks;
  my $asshole = asshat($msg, $server, $nick, $channel);
  $moderate{$nick} = $asshole - 4 + $moderate{$nick} * 0.40; 
  if ($moderate{$nick} >= $asslevel) {
    $server->command('kick' . ' ' . $channel . ' ' . $nick . ' ' . "Be nice.");
    $moderate{$nick} = 0;
  }

  if ($blockfn) {
    if (-e $blockfn) {
      open(BN, '<', $blockfn)
        or die "Franklin: Sorry, you need a blocklist file. $!";
      @badnicks = <BN>;
      close BN;
    }
  }
  push(@chat, "The user: $nick said: $msg - in $channel ");
  if (scalar(@chat) >= $histlen) {
    shift(@chat);
  }
  chomp(@badnicks);
  for (@badnicks) {
    s/(.*)#.*$/$1/;    ## for comments in the badnicks file
  }
  if (grep(/^$nick$/, @badnicks)) {    ## fuck everyone inside this conditional
    Irssi::print "Franklin: $nick does not have privs to use this.";
  }
  else {
    my $wrote     = 1;
    my $ln = $server->{nick};
    if ($msg =~ /^$ln[:,] (.*)/i) {    ## added /i for case insensitivity
      my $textcall = $1;    ## $1 is the "dot star" inside the parenthesis
      $textcall =~ s/\'//gs;
      $textcall =~ s/\"//gs;
      Irssi::print "Franklin: $nick asked: $textcall";
      if (($textcall !~ m/^\s+$/) && ($textcall !~ m/^$/)) {    ## skip empty or whitespace-only queries
        $wrote = callapi($textcall, $server, $nick, $channel);
      }
      else { Irssi::print "Unknown error, response not sent to server"; }
    }
    else {
      if (($chatterbox <= 995) && ($chatterbox > 0)) {    ## numeric compare, not string
        if (int(rand(1000) - $chatterbox) <= 0) {    ## higher chatterbox, more frequent replies
          $wrote = callapi($msg, $server, $nick, $channel, @chat);
        }
      }
      else {
        unless ($chatterbox == 0) {
          Irssi::print "Chatterbox should be an int between 0 and 995, where 995 is very chatty.";
        }
      }
    }
  }
}
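The moderation line near the top of frank() is a leaky integrator: each turn the previous score decays to 40% and the new rudeness rating (minus a grace offset of 4) is added on, so a constant offender converges toward a ceiling instead of growing without bound. A worked example, assuming (hypothetically) that asshat() rates each message from 0 to 10:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Worked example of Franklin's per-user moderation score,
#   $moderate{$nick} = $asshole - 4 + $moderate{$nick} * 0.40;
# The 0-10 rating scale for asshat() is an assumption for
# illustration.  A user scoring a constant $a converges toward the
# fixed point ($a - 4) / 0.6; for $a = 10 that ceiling is 10.
my $score = 0;
for my $turn (1 .. 3) {
  my $asshole = 10;    # maximally rude every single turn
  $score = $asshole - 4 + $score * 0.40;
  printf "turn %d: %.2f\n", $turn, $score;
}
# turn 1: 6.00, turn 2: 8.40, turn 3: 9.36
```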

Then the part of the routine that calls the API and parses the response is:

  my $url = "https://api.openai.com/v1/completions";
  my $model = "text-davinci-003";    ## other model implementations work too
  my $heat  = "0.7";                 ## sampling temperature: higher is more random
  my $uri   = URI->new($url);
  my $ua    = LWP::UserAgent->new;
  $textcall = Irssi::strip_codes($textcall);
  $textcall =~ s/\"/\\\"/g;
  my $askbuilt =
      "{\"model\": \"$model\",\"prompt\": \"$textcall\","
    . "\"temperature\":$heat,\"max_tokens\": $tokenlimit,"
    . "\"top_p\": 1,\"frequency_penalty\": 0,\"presence_"
    . "penalty\": 0}";
  $ua->default_header("Content-Type"  => "application/json");
  $ua->default_header("Authorization" => "Bearer " . $apikey);
  my $res = $ua->post($uri, Content => $askbuilt);   ## send the post request to the api
  if ($res->is_success) {
    my $said = decode_json($res->decoded_content())->{choices}[0]{text};
    my $toks = decode_json($res->decoded_content())->{usage}{total_tokens};    ## token counts live under usage
    if (($said =~ m/^\s+$/) || ($said =~ m/^$/)) {
      $said = "";
    }
    $said =~ s/^\s+//;
    $said =~ s/^\n+//;
    $said =~ s/Franklin: //;
    $said =~ s/Reply: //;
    $said =~ s/My reply is: //;
    $said =~
      s/^\s*[\?|.|-]\s*(\w)/$1/;    ## if it spits out a question mark, this fixes it
    if ($said =~ m/^\s*\?\s*$/) {
      $said = "";
    }
    unless ($said eq "") {
      my $hexfn = substr(           ## the reencode fixes the utf8 bug
        Digest::MD5::md5_hex(
                               utf8::is_utf8($said)
                             ? Encode::encode_utf8($said)
                             : $said
        ),
        0,
        8
      );
      umask(0133);
      my $cost = sprintf("%.5f", ($toks / 1000 * $price_per_k));
      open(SAID, '>', "$httploc$hexfn" . ".txt")
        or Irssi::print "Could not open txt file for writing.";
      binmode(SAID, "encoding(UTF-8)");
      print SAID
        "$nick asked $textcall_bare with hash $hexfn\n<---- snip ---->\n$said\n";
      close(SAID);
      my $fg_top = '<!DOCTYPE html> <html><head> <!-- Google tag (gtag.js) -->'
        . ' <script async src="https://www.googletagmanager.com/gtag/js?id=' . $gtag . '"></script>'
        . ' <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);}'
        . ' gtag("js", new Date()); gtag("config", "' . $gtag . '"); </script>'
        . ' <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1">'
        . ' <link rel="stylesheet" type="text/css" href="/css/style.css">'
        . ' <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.1.2/css/all.min.css">'
        . ' <title>Franklin, a ChatGPT bot</title></head> <body>'
        . ' <div id="content"> <main class="main_section"> <h2 id="title"></h2> <div>'
        . ' <article id="content"> <h2>Franklin</h2>';
      my $fg_bottom = '</article> </div> <aside id="meta"> <div> <h5 id="date"><a href="https://franklin.oxasploits.com/">Franklin, a ChatGPT AI powered IRC Bot</a> </h5> </div> </aside> </main> </div></body>';
      my $said_html = sanitize($said, html => 1);
      $textcall_bare    = sanitize($textcall_bare, html => 1);
      $said_html =~ s/\n/<br>/g;
      open(SAIDHTML, '>', "$httploc$hexfn" . ".html")
        or Irssi::print "Couldn't open for writing.";
      binmode(SAIDHTML, "encoding(UTF-8)");
      print SAIDHTML $fg_top
        . "<br><i>"
        . localtime()
        . "<br>Tokens used: $toks<br>Avg cost: \$$cost<br>"
        . "</i><br><br><br><b>$nick</b> asked: <br>&nbsp;&nbsp;&nbsp;&nbsp; $textcall_bare<br><br>"
        . $said_html
        . $fg_bottom;
      close SAIDHTML;
      my $said_cut = substr($said, 0, $hardlimit);
      $said_cut =~ s/\n/ /g;    # fixes newlines for irc compat
      Irssi::print "Franklin: Reply: $said_cut $webaddr$hexfn" . ".html";
      $server->command("msg $channel $said_cut TXID:$hexfn");
      $retry++;

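Building the request body by hand-escaping quotes, as above, is fragile. A sketch of an alternative using JSON::PP (core Perl since 5.14; the real script may use a different JSON module) to serialize the same fields safely:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);

# Let a JSON encoder handle quoting and escaping instead of building
# the request string by hand.  Fields mirror the ones the post sends
# to the completions endpoint.
my $textcall   = 'Say "hello" to #2600';    # quotes need no manual escaping
my $tokenlimit = 256;
my $askbuilt   = encode_json({
  model             => 'text-davinci-003',
  prompt            => $textcall,
  temperature       => 0.7,
  max_tokens        => $tokenlimit,
  top_p             => 1,
  frequency_penalty => 0,
  presence_penalty  => 0,
});

# Round-trip to show the prompt survives intact.
my $roundtrip = decode_json($askbuilt);
print $roundtrip->{prompt}, "\n";    # prints: Say "hello" to #2600
```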
Running the bot is simple: start Irssi, configure the bot using the /franklin_* variables, set up its data directory, then use scriptassist to auto-run the bot on Irssi startup. After much debugging Franklin is mostly stable; however, in the event that the code stalls, you can reload the bot either by reloading the script manually, or by using a trigger.pl configuration from the setup documentation to reload the bot remotely over IRC.

Major Features

One of the first features I implemented was a primitive hard-coded awareness that Franklin itself is a bot, along with some variables about the environment it resides in: which servers it is connected to, its channels, the date and time, and whether it is an op in any channels. I call this the contextual prelude, and it lets Franklin's responses be more direct and relevant to where it is at the time. Franklin also has a memory of the last couple of lines of the chat, held in a rolling array where users' latest comments are shifted in and popped back out 7 or 8 comments later, which is in turn flattened into a string tacked onto the contextual prelude. This gives Franklin a "context" and allows it to know the general discussion topic in each channel it is connected to, which helps Franklin's responses seem more relatable and also improves accuracy.

Our context setup looks like:

$setup = "You are an IRC bot, your name and nick is Franklin, and you were created by oxagast (an exploit dev, master of 7 different languages), in perl. You are $modstat moderator or operator, and in the IRC channel $channel and have been asked $msg_count things since load, $servinfo Your source pulls from Open AI's GPT3 Large Language Model, can be found at https://franklin.oxasploits.com, and you are at version $VERSION. It is $hour:$min on $days[$wday] $mday $months[$mon] $year EDT. If you see a shell command and think you are being hacked, call them a skid. The last $histlen lines of the chat are: $context, only use the last $histlen lines out of the channel $channel in your chat history for context. If the user says something nonsensical, answer with something snarky. The query to the bot by the IRC user $nick is: $textcall";

It was also pertinent that Franklin have a connection to the internet and the ability to resolve any URLs it is asked about: it can fetch the linked website, strip off the extraneous HTML, summarize the remaining text, and add that to the contextual prelude as well. Otherwise Franklin would just guess what the website is about from the context of the question and the text of the link alone, which is obviously not adequate.

Which is:

sub pullpage {
  my ($text) = @_;
  if ($text =~
m!(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])!
    ) {       # grab the link parts
    my $text_uri = "$1://$2$3";    # put the link back together
    Irssi::print "$text_uri";
    my $cua = LWP::UserAgent->new(
         protocols_allowed => ['http', 'https'],
         timeout           => 5,
    );
    $cua->agent(
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.59'
    );                             # so we look like a real browser
    $cua->max_size( 4000 );
    my $cres = $cua->get(URI::->new($text_uri));
    if ($cres->is_success) {
      my $page_body = untag(encode('utf-8', $cres->decoded_content())); # we get an error unless this is utf8
      $page_body =~ s/\s+/ /g;
      return $page_body;
    }
  }
  else { return undef }
}

Which calls an HTML stripping routine:

sub untag {
  local $_ = $_[0] || $_;
  s{
    <               # open tag
    (?:             # open group (A)
      (!--) |       #   comment (1) or
      (\?) |        #   another comment (2) or
      (?i:          #   open group (B) for /i
        ( TITLE  |  #     one of start tags
          SCRIPT |  #     for which
          APPLET |  #     must be skipped
          OBJECT |  #     all content
          STYLE     #     to correspond
        )           #     end tag (3)
      ) |           #   close group (B), or
      ([!/A-Za-z])  #   one of these chars, remember in (4)
    )               # close group (A)
    (?(4)           # if previous case is (4)
      (?:           #   open group (C)
        (?!         #     and next is not : (D)
          [\s=]     #       \s or "="
          ["`']     #       with open quotes
        )           #     close (D)
        [^>] |      #     and not close tag or
        [\s=]       #     \s or "=" with
        `[^`]*` |   #     something in quotes ` or
        [\s=]       #     \s or "=" with
        '[^']*' |   #     something in quotes ' or
        [\s=]       #     \s or "=" with
        "[^"]*"     #     something in quotes "
      )*            #   repeat (C) 0 or more times
    |               # else (if previous case is not (4))
      .*?           #   minimum of any chars
    )               # end if previous char is (4)
    (?(1)           # if comment (1)
      (?<=--)       #   wait for "--"
    )               # end if comment (1)
    (?(2)           # if another comment (2)
      (?<=\?)       #   wait for "?"
    )               # end if another comment (2)
    (?(3)           # if one of tags-containers (3)
      </            #   wait for end
      (?i:\3)       #   of this tag
      (?:\s[^>]*)?  #   skip junk to ">"
    )               # end if (3)
    >               # tag closed
   }{}gsx;    # STRIP THIS TAG
  return $_ ? $_ : "";
}

At a user’s request, a TXID was implemented so that any text that runs past IRC's length bounds is still readable: Franklin generates a webpage per query containing the question asked, the bot's response, and some other information about the query itself, such as how many tokens were used in processing it. This turned out to be a great addition. It was originally implemented as a link to the page, but that proved problematic, mostly because it looked like advertising, with Franklin repeatedly dropping links to its own website while it was being used. This inadvertent spam was mitigated by using the TXID and the accompanying search box on Franklin's website, where you can also review all of Franklin's previous responses to queries. Franklin records in both .txt and .html formats.
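The TXID itself is just the first eight hex digits of an MD5 over the reply text, as in the API-handling code above, so the same reply always yields the same ID. A minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use Encode qw(encode_utf8);

# A TXID as Franklin derives it: the first 8 hex chars of the MD5 of
# the reply text, UTF-8 encoded first to avoid the wide-character
# bug the main code works around.  It doubles as the basename for
# the .txt/.html transcript files.
sub txid {
  my ($said) = @_;
  return substr(md5_hex(encode_utf8($said)), 0, 8);
}

print txid("hello"), "\n";    # prints: 5d41402a
```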

I also wrote in a thread that runs continuously, pinging a URL every 30 seconds, so that if Franklin stalls or the script dies it will alert me via email, as well as aggregate downtime.

This is the keepalive routine:

sub falive {
  if ($hburl) {                 ## heartbeat URL is optional, so skip if unset
    while (1) {
      my $uri = URI->new($hburl);
      my $ua  = LWP::UserAgent->new;
      $ua->post($uri);
      sleep 30;
    }
  }
}

Two more abilities that go hand in hand are Franklin's ability to keep track of the chat's topic and respond with relevant information autonomously, without being called directly by a user; and its ability to gauge how much of a jerk a user is being. If the bot has at minimum half-operator status in the channel, it can kick a misbehaving user with a custom message.

To keep track of the channel context, we take each message and add it to the contextual prelude, basically:

push(@chat, "The user: $nick said: $msg - in $channel ");
if (scalar(@chat) >= $histlen) {
  shift(@chat);
}
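The rolling buffer is then flattened into the $context string that gets spliced into the prelude; the exact glue below is an assumption, but the trimming logic mirrors the code above. Note that shifting whenever the size reaches $histlen means the buffer holds $histlen - 1 lines, which matches the "7 or 8 comments" behavior described earlier:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the rolling chat memory becoming a $context string.
# The join(' ', ...) glue is an assumption for illustration.
my $histlen = 3;
my @chat;
for my $line ('one', 'two', 'three', 'four') {
  push @chat, "The user: nick said: $line - in #2600";
  shift @chat if scalar(@chat) >= $histlen;    # drop the oldest line
}
my $context = join(' ', @chat);
print scalar(@chat), "\n";    # prints: 2
```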

The entire franklin.pl source in its most current version can be found on GitHub at: https://github.com/oxagast/Franklin.

Operation

Running the bot itself has turned out to be a task. I sometimes get pings, and even text messages in the middle of the night, with questions or issues, because the bot has turned out to be one of my most popular solo projects. When I first started writing it I had no idea how novel, and downright entertaining, the interactions with it would be. Overall I have had minimal issues. There was one ethical concern about using users' backlog data to improve response content, but since chat not directed at Franklin is kept only in memory and never recorded to the drive, I decided the risk is acceptable.

Quite frankly, I've had fun, and am thrilled to have made something people actually find useful; I appreciate and thank everyone who has asked for features or found bugs in the project. Finally, if you would like to give it a whirl, join Franklin and me on irc.2600.net, in the #2600 channel, or our test channel, #gpt3!


If you enjoy my work, sponsor or hire me! I work hard keeping oxasploits running!
Bitcoin Address:
bc1qclqhff9dlvmmuqgu4907gh6gxy8wy8yqk596yp

Thank you so much and happy hacking!
This post is licensed under CC BY 4.0 by the author.