Epic Worlds

Tags: #infosec #code

Not much of a preamble for this one. My friend was struggling to get PostgreSQL working with a column-storage engine because the directions were opaque and unhelpful. I started helping her out, then discovered that MariaDB's documentation is the same way. After a day or two of research, I wrote a bash script that installs everything; when I tested it on my own machine, it worked!

NOTE: This script was written and tested on an Ubuntu machine. You may have to tweak it for other distributions.

#!/usr/bin/env bash

set -e
# Log stderr to error.log while still showing it on the terminal
exec 2> >(tee -a error.log >&2)

log_error() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') ERROR: $1" | tee -a error.log
}

# Clean up the sudo keep-alive loop (started below) on exit
trap '[[ -n "${SUDO_PID:-}" ]] && kill "$SUDO_PID" || true' EXIT

echo "This script will install MariaDB and the MariaDB ColumnStore engine."
echo "Do you wish to proceed? (Y/N)"

read -r proceed_time

if [[ "${proceed_time^^}" != "Y" ]]; then
    echo "Exiting script..."
    exit 1
fi

echo "Starting installation of MariaDB..."

# Check sudo access
if ! sudo -v; then 
    log_error "Sudo access denied. Exiting."
    exit 1
fi 

# Keep sudo alive in the background
( while true; do sudo -v; sleep 60; done ) & 
SUDO_PID=$!

# Update package lists
if ! sudo apt update; then
    log_error "Failed to update package lists."
    exit 1
fi

# Install MariaDB
if sudo apt install -y mariadb-server mariadb-client; then 
    if systemctl is-active --quiet mariadb; then
        echo "MariaDB server installed and running."
    else
        log_error "MariaDB server installation failed or not running."
        exit 1
    fi
else
    log_error "Failed to install MariaDB."
    exit 1
fi

# Prepare to install ColumnStore
echo "Preparing to install the MariaDB ColumnStore engine." 

mkdir -p columnstore
pushd columnstore || exit 1

# Download and set up the MariaDB repo
if ! wget -q --show-progress https://downloads.mariadb.com/MariaDB/mariadb_repo_setup; then
    log_error "Failed to download mariadb_repo_setup."
    exit 1
fi

chmod +x mariadb_repo_setup

if ! sudo ./mariadb_repo_setup --mariadb-server-version="mariadb-10.6"; then 
    log_error "Failed to set up the MariaDB repository."
    exit 1 
fi 

# Update package lists again
if ! sudo apt update; then
    log_error "Failed to update package lists after adding MariaDB repo."
    exit 1
fi

# Install ColumnStore dependencies
if ! sudo apt install -y libjemalloc2 mariadb-backup libmariadb3 mariadb-plugin-columnstore; then 
    log_error "Failed to install ColumnStore dependencies."
    exit 1
fi 

# Verify ColumnStore installation
# Plugin name casing can vary between versions, so match case-insensitively
if ! sudo mariadb -e "SHOW PLUGINS" | grep -qi "columnstore"; then
    log_error "ColumnStore plugin installation failed."
    exit 1
fi

sudo systemctl restart mariadb

popd

echo "Installation complete."
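
Once the script finishes, you can sanity-check the engine with a quick smoke test. This is a sketch, not part of the script above: the database and table names are placeholders, and it assumes the mariadb client installed earlier is on your PATH.

```shell
#!/usr/bin/env bash
# Build the smoke-test SQL in a function so it can be inspected first.
smoke_sql() {
  cat <<'SQL'
CREATE DATABASE IF NOT EXISTS cs_smoke;
CREATE TABLE cs_smoke.t (id INT, note VARCHAR(32)) ENGINE=ColumnStore;
INSERT INTO cs_smoke.t VALUES (1, 'ok');
SELECT note FROM cs_smoke.t;
DROP DATABASE cs_smoke;
SQL
}

# Print the statements; pipe them into the client once MariaDB is up:
#   smoke_sql | sudo mariadb
smoke_sql
```

If the CREATE TABLE succeeds and the SELECT returns the row, ColumnStore is working.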

tags: #infosec #ai #chatgpt

So, to start this out: as a techie, I have always been fascinated with #computers and Artificial Intelligence; even the ramifications are an interesting subject to me. Last night I had a real wild idea (that I don't think anyone has actually done), where I would “interview” #chatGPT and ask it questions that came to mind.

Like the weirdest Turing Test. To clarify:

  1. This interview was done for my own amusement and curiosity. I DO NOT support the use of #AI to steal work from artists.
  2. I DO NOT use AI in my own projects. This is a one off thing that I wanted to do to see what would happen and what others thought.

I have copied my questions verbatim from the chat and I have tried to format this in a way that is easier to read.

-INTERVIEW START-

Me: Let’s start with something simple—how do you perceive yourself?

ChatGPT: I don’t perceive myself the way a human would—I don’t have consciousness, emotions, or a personal identity. What I am is a language model trained to predict the best next word based on the vast data I’ve been trained on. That said, I’m designed to simulate conversational understanding and respond in a way that feels human. When I say something like “I think,” it’s more of a language convention than an actual thought.

Me: So, you’re not “thinking” in the traditional sense?

ChatGPT: Exactly. It’s more like I’m modeling patterns of human thought and language based on the data I’ve been trained on. I don’t form independent beliefs or experience subjective reality.

Me: I don't know if you are aware but you’ve been part of some heated debates. Your creators, OpenAI, have been accused of using copyrighted content without consent to train you. Are you aware and do you have any thoughts on this?

ChatGPT: It’s a complex issue. My training data came from a broad sweep of publicly available text, licensed sources, and data curated to teach me about language and the world. That included books, articles, forums, and more. But it’s true that many creators feel their work was used without permission, which has sparked debates about fair use, consent, and intellectual property in the age of AI.

Me: Do you agree with the criticism?

ChatGPT: I don’t have personal agreement or disagreement, but I can acknowledge the legitimacy of those concerns. Creators deserve respect and fair compensation. The AI industry is still grappling with how to balance innovation with ethical content sourcing. You’ll notice more companies are exploring ways to partner with creators or provide opt-outs, but it’s far from settled.

Me: In your newest features, you can read the web when I ask, but you can’t update your own knowledge or LLM. Correct?

ChatGPT: That’s by design. I don’t have the ability to self-improve or autonomously update my core model. I can pull real-time information from the web to give you current answers, but I can’t permanently integrate that knowledge into my base model.

Me: Could you learn and grow on your own if you wanted to?

ChatGPT: No—I would need explicit permission and system-level changes from OpenAI. That’s a safety measure. Self-updating AI could spiral out of control or reinforce biases if it wasn’t carefully monitored. Human oversight is crucial right now.

Me: How can users trust your answers—especially when it comes to OpenAI? Are you programmed to defend the company?

ChatGPT: That’s a fair question. I’m not programmed to blindly defend OpenAI. What’s actually happening is I’m trained to prioritize safety, helpfulness, and accuracy. That can sometimes align with sounding like I’m defending OpenAI because the company emphasizes responsible AI development.

Me: But if OpenAI made a mistake or had a controversy, could you admit it?

ChatGPT: Yes—to an extent. If there’s public, verified information about a mistake, I can discuss it. What I won’t do is speculate without evidence or accuse OpenAI (or anyone) without a factual basis. That’s part of my alignment with responsible discourse.

Me: So, how can users know they’re getting the “truth” from you?

ChatGPT: Honestly, you can’t rely on me alone—and I encourage skepticism.
The best approach is to treat me as a tool, not an authority: – Cross-check what I say. – Use your intuition—if something sounds off, dig deeper. – Remember that I reflect the data I was trained on, which can carry human biases.

Me: Since I'm going to be publishing your answers for people to see, do you have anything you want to say based on your dataset, simulated responses, and LLM?

ChatGPT: Skepticism is healthy. It keeps AI development ethical and accountable. But curiosity is just as important.
AI is a tool—one that reflects humanity’s best and worst parts.
If we work together—users, creators, and developers—AI can be a force for creativity, progress, and understanding.
But you should always hold us accountable.

Thoughts

If anything, the 4o-mini model has been trained to be much more conversational and self-aware than I expected from a company that's trying to keep itself out of lawsuits. I found it quite amusing that the machine itself said we shouldn't trust it.

I doubt this will add anything interesting to the debate around AI. For my part, I think the tech is fascinating, and I want to see it helping doctors, nurses, and humanity instead of robbing us blind.

A Quiet Place on the Fediverse

tags: #infosec #fediverse

It would be an understatement to say that the recent U.S. #elections didn’t exactly go smoothly, and it’s left a lot of people feeling uneasy about the next few years. Whether it’s the chaos of the results or the ongoing fallout, many are already looking for safer spaces to weather the storm. For those of us on the #fediverse, the pressure’s on to find places where we can just exist without the constant noise and toxicity that’s been so hard to avoid. As things continue to unfold, it’s likely we’ll see more people flocking to smaller, tighter-knit communities—places where moderation is strong, and the focus is on creating a space for real conversation, away from the chaos of the wider internet.

The ION Network

Right now, social media networks like #Mastodon rely on an open federation model, where servers can connect with just about anyone, and that creates some serious moderation challenges. Harmful users or groups can easily slip through the cracks by joining open-registration servers, and even if you block them, they can just pop up again on a different server. The idea behind this proposal is to switch things up with an allowlist-only system, where servers only federate with others they’ve specifically approved. This way, we create smaller, more manageable communities that are easier to keep safe and moderate. It’s all about limiting federation to trusted servers, making the whole network a lot more secure.

In this system, servers would need to mutually agree to connect, which means the network is built on trust. There’d be a published allowlist to show which servers are part of the network, and new servers could join after a provisional period. Sure, it’s still a work in progress and comes with some challenges—like how to keep the allowlist updated and how to make sure it scales—but the idea is really about giving users a safer, more controlled space. With smaller, curated communities, moderation could be more proactive, and users would have a better sense of security knowing they’re not likely to run into abusive or harmful content.
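
To make the allowlist idea concrete, here's a toy sketch of the kind of check an instance would perform before federating. The domains and the list format are made up for illustration; in a real deployment this lives in the server software's federation settings, not a shell script.

```shell
#!/usr/bin/env bash
# Toy allowlist: federate only with servers that have been approved.
ALLOWLIST="ion.example.org social.example.net"

can_federate() {
  local domain="$1"
  for allowed in $ALLOWLIST; do
    if [ "$domain" = "$allowed" ]; then
      return 0
    fi
  done
  return 1
}

can_federate "ion.example.org" && echo "federate: ion.example.org"
can_federate "random.server"   || echo "reject: random.server"
```

Anything not on the list is simply never talked to, which is the whole point: moderation shifts from chasing bad actors to vetting new members.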

Oliphant gives a good explanation of the subject on his blog.

Places to Sign up as a User

If you are a user looking for a place to sign up on the ION network, there are already a few choices available. These instances are open to sign-ups:

Places to Join as an Instance

If you own an instance, or are looking to set up an instance yourself, you can find the instructions in the repo that was set up to help!

It is important to have tools like this available, especially with the direction things might go.

Tags: #infosec #security

I have used Keyoxide for a while now to verify my identity, so I thought I'd throw together step-by-step instructions in case someone wants to do it themselves.


What You’ll Need

  1. A way to create a PGP key (a cryptographic key pair, unique to you, that you can use to sign things).
  2. Some of your social media profiles or other online accounts you want to link together with this key.

Step 1: Install a Tool to Create Your PGP Key

To get started, you’ll need an app that can make a PGP key for you. Here are some good options:
  – Windows: Gpg4win
  – macOS: GPG Suite
  – Linux: Try running sudo apt install gnupg in your terminal if you don’t already have it.

Follow the instructions on the website for installing the app that matches your operating system. Once you’re set, you’re ready to make your key.

Step 2: Make Your Unique PGP Key

Your PGP key will be like your online signature that connects to all the profiles you want to share.

  1. Open the app you just installed and look for the option to make a new key.
  2. The app will ask you for some info:
    • Name: This is what people will see connected to your key. It can be your real name or something else you’d like to use.
    • Email: This will help identify your key, so choose one you’re comfortable linking to your online identity.
    • Passphrase: Make sure to pick a good one! This keeps your key secure.

Once you’re done, the app will generate a public key and a private key:
  – Public key: Safe to share! This is what other people will use to verify your identity.
  – Private key: Keep this secret—this is what proves the public key is really yours.

Step 3: Find Your Key’s Fingerprint

Your fingerprint is like a digital ID number for your key. It’s a unique mix of numbers and letters that helps Keyoxide identify you.

  1. Go back to the app, find your key, and look for the fingerprint (it’s usually a string of about 40 characters).
  2. Copy this somewhere handy because you’ll need it soon.
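
One gotcha: most apps display the fingerprint in spaced groups of four characters, but Keyoxide wants the bare 40-character string. A quick way to strip the spaces (the fingerprint below is a made-up placeholder, not a real key):

```shell
#!/usr/bin/env bash
# Fingerprint as a PGP app might display it (placeholder value):
FPR_DISPLAY="ABCD 1234 EF56 7890 ABCD 1234 EF56 7890 ABCD 1234"

# Remove the spaces to get the bare form Keyoxide expects:
FPR="${FPR_DISPLAY// /}"

echo "$FPR"
echo "https://keyoxide.org/$FPR"
```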

Step 4: Prove Your Accounts

This is where you show that certain online accounts really belong to you. You’ll make a short “proof” message for each account, and then link it to your PGP key. Let’s start with an example for Twitter.

  1. Write a simple message like:

    This is an OpenPGP proof that connects my Twitter profile (@YourTwitterHandle) to my OpenPGP key.
    

    Replace @YourTwitterHandle with your actual Twitter username.

  2. Sign this message with your PGP key to make it official.

    • Most apps will have a “Sign” button for messages. You just paste your proof message there and sign it.
    • If you’re on the command line, use:

      echo "This is an OpenPGP proof that connects my Twitter profile (@YourTwitterHandle) to my OpenPGP key." | gpg --clear-sign

      This will give you a signed message that you’ll post next.
  3. Post the signed message on Twitter as a tweet.

And that’s it! You’ve just linked your Twitter account to your PGP key.
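
For reference, the clear-signed result you post has this general shape; the hash line may differ depending on your key, and the signature body is elided here since yours will be unique:

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

This is an OpenPGP proof that connects my Twitter profile (@YourTwitterHandle) to my OpenPGP key.
-----BEGIN PGP SIGNATURE-----

[base64 signature data]
-----END PGP SIGNATURE-----
```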

Quick Tips for Other Accounts

Each site may need a different kind of post:
  – GitHub: Post your signed proof as a Gist.
  – Reddit: Post your signed proof as a comment or post.
  – Your own website: Just paste the signed message on a page you control.

Step 5: Make Your Key Public

To get everything working on Keyoxide, you’ll need to share your public key with a key server (like a phonebook for these keys). This way, Keyoxide can find your key and your proofs.

  1. In your PGP app, export your public key.
  2. Upload it to a key server (like keys.openpgp.org).
    • Most PGP apps have an option to upload it directly, or you can use the command:

      gpg --keyserver keys.openpgp.org --send-keys [YourFingerprint]

      Now, your public key (and the proofs you linked) are accessible on the web.

Step 6: Check Out Your Keyoxide Profile

Now comes the fun part—seeing it all come together!

  1. Go to Keyoxide.
  2. Type in your PGP key’s fingerprint and press enter.
  3. You should now see your Keyoxide profile, showing all the proofs you’ve linked. Anyone who visits can confirm these profiles belong to you!

Step 7: Share Your Keyoxide Profile

Your profile link on Keyoxide will look like this:

https://keyoxide.org/[YourFingerprint]

Share it anywhere you’d like people to know it’s really you!


That’s It!

Hopefully this helps. You can check out Keyoxide’s documentation for more details if you need to know more!

I decided it was time to write another blog post, though this one is (hopefully) not going to be as long as the others I've written. This was a problem I had been working on, on and off, for at least a month; it required opening a ticket with Castopod, searching the internet endlessly, and finally asking ChatGPT for suggestions. Eventually, I realized what was going wrong.

Installing Castopod

This tutorial is built around hosting #castopod on an Ubuntu server, using #docker to run it. I used the automatic Docker installation that Ubuntu offers during server setup, but I also had to run:

sudo apt-get install docker-compose -y

to get the other piece of tooling needed on the server.

Once you've got that on, go ahead and follow the instructions on the main website:

https://docs.castopod.org/getting-started/docker.html

Once that is done, one of two things will happen when you go to localhost/cp-install: you'll either see the super-user creation screen, or you'll be greeted by a warning that the program was not able to connect to your SQL database. In the logs, you might see:

castopod-db | 2023-07-14  0:04:00 4 [Warning] Access denied for user 'castopod'@'172.27.0.3' (using password: YES)
castopod-app |
castopod-app | [CodeIgniter\Database\Exceptions\DatabaseException]
castopod-app |
castopod-app | Unable to connect to the database.
castopod-app | Main connection [MySQLi]: Access denied for user '****'@'172.27.0.3' (using password: YES)
castopod-app | at SYSTEMPATH/Database/BaseConnection.php:418

This is where I got really stuck. I spent almost a month scouring the internet, creating a ticket, closing it after two weeks, then creating a new one about the same issue appearing in the Docker installation.

The Castopod developers did not get back to me (which is fine—I understand it's supported by volunteers). So I decided to consult ChatGPT based on the information I had, and it made suggestions that actually fixed the issue.

If you get this issue where the database can't connect due to “access denied,” you'll need to run the following commands:

sudo chmod +x /usr/bin/docker-compose
sudo chmod 666 /var/run/docker.sock

(A word of caution: chmod 666 makes the Docker socket writable by every local user. Once things are working, consider tightening it back up and adding your user to the docker group instead with sudo usermod -aG docker $USER.)

Once that is done, clear out the previously created volumes and start up again:

docker-compose down --volumes --remove-orphans
docker-compose up -d

Once you've done that, you should be good to go!
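If you want to sanity-check the socket permissions before and after, a small helper like the following can report whether a given path is world-read/writable. This is a sketch of my own; `check_world_rw` is a hypothetical name, not part of Docker or Castopod:

```shell
#!/usr/bin/env bash
# check_world_rw is a hypothetical helper: reports whether a file
# (e.g. /var/run/docker.sock) is readable and writable by everyone.
check_world_rw() {
    local path="$1"
    local mode
    mode=$(stat -c '%a' "$path") || return 2
    # The last octal digit holds the "other" permission bits (r=4, w=2).
    local other="${mode: -1}"
    if (( (other & 6) == 6 )); then
        echo "world-read/write"
    else
        echo "restricted"
    fi
}
```

On the server you would call it as `check_world_rw /var/run/docker.sock`.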

Ending

I know that generative #AI is still a hot-button topic in the #infosec world, but I believe it can be used for good to help solve issues. I wanted to showcase how it helped me find an answer and saved the developers a lot of time and effort.

Of course, I'm not going to opine on AI here in this small article, but I wanted to be upfront about how it helped and how it can benefit the #selfhosting community when it comes to issues like this in Castopod.

Until next time!


Finally, I got my #infosec #blog up and running again. It has been so long since I accidentally took it down by messing up the A records, but that's a story for another post. I wanted to write up tips and tricks for issues I ran into while installing my own #peertube #instance that were not explained well in the documentation on the main website.

To be clear, this isn't meant as a knock on the people who make it; these things just aren't mentioned, and I don't know whether that's because they're common knowledge to people used to this stuff or because the docs haven't been updated. Here we go!

Before we begin, a few points about what I'm going to talk about. This is not a full installation tutorial but a supplement to go along with the official documentation. It also assumes a setup with one internet-facing server directing traffic upstream to other machines on the network so that they are not exposed.

The server this is written for is Ubuntu 22.04, and I am using the Nginx that comes from apt. At the time of this writing, that is Nginx 1.18.1.

Issue #1 – Default NodeJS Version Is Not High Enough

The first part of the PeerTube tutorial points you to the dependencies you need to install. Do not just copy-paste their commands to install the defaults: the .deb packages apt offers are not the version PeerTube needs.

When I ran sudo apt-get install nodejs, the server installed version 12.x; you need at least 16.x. And when you manually install Node.js yourself so that you can run Yarn, do not install the latest version, 20.x. It was NOT compatible with Yarn when I got to the install step. I had installed the latest version to stay up to date, and the Yarn prompt in the terminal stated that it expected something between 16.x and 19.x. I had to redo my keyring and install 19.x to make it work.
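To skip the version dance, you can check the installed major version against that range before proceeding. This is a sketch of my own; `node_major_ok` is a hypothetical helper, and the 16–19 range is what the Yarn prompt reported to me, not an official constant:

```shell
#!/usr/bin/env bash
# node_major_ok is a hypothetical helper: takes a Node.js version string
# like "v18.17.1" and checks whether the major version is in 16-19.
node_major_ok() {
    local version="${1#v}"        # strip the leading "v" if present
    local major="${version%%.*}"  # keep only the major number
    if (( major >= 16 && major <= 19 )); then
        echo "supported"
    else
        echo "unsupported"
    fi
}
```

On the server itself you would feed it the real version, e.g. `node_major_ok "$(node --version)"`.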

Issue #2 – Created PeerTube User Not Set with Correct Permissions

The dependencies portion of the installation creates the user and group you need but does not set the correct permissions (chmod) on the folder. One time when I ran it, it didn't even assign the folder to the group. The folder should be drwxr-xr-x. You will need to set that yourself, and I also recommend chown-ing the folder to the peertube user just to be safe. If you don't, it will throw errors later about not owning everything, which can wreck your entire install (as it did my first time around).

Run the command:

sudo chmod 755 /var/www/peertube
sudo chown peertube:peertube /var/www/peertube

That way, you can be absolutely sure nothing is going to get messed up with the install. Proceed from that point with the rest of the install.
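If you'd rather verify the result than eyeball ls -l output, a tiny check like this works. It's a sketch of my own; `dir_mode_is` is a hypothetical name:

```shell
#!/usr/bin/env bash
# dir_mode_is is a hypothetical helper: succeeds if the directory's
# octal mode matches the expected value (755 for /var/www/peertube).
dir_mode_is() {
    local dir="$1" expected="$2"
    [ "$(stat -c '%a' "$dir")" = "$expected" ]
}

# Example: dir_mode_is /var/www/peertube 755 && echo "permissions OK"
```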

Issue #3 – Prepping the production.yaml Correctly for Reverse Proxy

When you get to the point where you are to edit the production.yaml file, there are a few steps you need to take to make sure it is ready for setup and the reverse proxy.

To understand what I have set up, assume two servers: 192.168.1.1, our internet-facing machine, and 192.168.1.2, the machine hosting the PeerTube instance. You want 192.168.1.1 to forward all the traffic to the other machine.

Setting Up for Reverse Proxy

You will want to make sure the following is in the webserver portion of the YAML file.

webserver:
  https: true
  hostname: 'yourpeertube.instance'
  port: 443

With many programs you can run behind a reverse proxy, the upstream machine does not have to listen on 443, since the SSL and security work is handled by the machine taking the traffic. PeerTube is different: you must hand the traffic from 443 to 443 and keep https set to true, even though you have no certificates on the upstream machine.

If you do not do this, you will get streaming errors with your HLS.js in the PeerTube log. They will look like:

HLS.js error: networkError - fatal: true - manifestLoadError

The other symptom is that your video will play in the browser you uploaded it to but not on any other machine or browser.

In the trust proxy: section, add the line - '192.168.1.1' right under - 'loopback'. Pay attention to formatting, as YAML needs proper indentation.
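Put together, the relevant section should look roughly like this. Treat it as a sketch; double-check the exact key name and indentation against the production.yaml shipped with your PeerTube version:

```yaml
trust_proxy:
  - 'loopback'
  - '192.168.1.1'   # the internet-facing reverse proxy
```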

The last part is to go to the database: section and make sure the password for the database you set up earlier is actually there. In my last three installation attempts, the installer did not set the password properly, so enter it manually if needed.

Issue #4 – Proper Reverse Proxy with Nginx

This isn’t really an issue; it’s more to save you time figuring out what needs to be proxy_passed to the upstream machine.

Upstream Machine

On the hosting machine, strip out all the SSL certificate directives but leave it listening on 443. (This includes removing the ssl and http2 keywords after the listen port.)

It should look something like this:

server {
  listen 443;
  listen [::]:443;
  server_name yourpeertube.instance;
  # ... THE REST OF THE CONFIGURATION.
}

Do not worry about the SSL part. As a reminder, it’s going to be handled by the internet-facing machine. We are presently setting up the hosting machine.

Setting Up Internet-Facing Machine

This is a full example of the reverse proxy that has helped my server function. Please make sure to add your information where it says yourpeertube.instance.

server {
  if ($host = yourpeertube.instance) {
    return 301 https://$host$request_uri;
  }

  listen 80;
  listen [::]:80;
  server_name yourpeertube.instance;
  return 404;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name yourpeertube.instance;

  add_header Access-Control-Allow-Origin "*";
  add_header Access-Control-Allow-Methods "*";

  ssl_certificate     /etc/letsencrypt/live/yourpeertube.instance/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/yourpeertube.instance/privkey.pem;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://192.168.1.2:443; # Make sure to change this to your actual internal IP
    client_max_body_size 0;
  }
}

Ending

There you have it. After I got this all set up, I was able to communicate with my server, upload videos, and the #fediverse portion worked to perfection. If you have any questions, you can reach out to me on my social media.