Interview with ChatGPT: Navigating Identity, Bias, and the Future of AI

tags: #infosec #ai #chatgpt

So, to start this out: as a techie, I have always been fascinated with #computers and Artificial Intelligence; even the ramifications are an interesting subject to me. Last night I had a wild idea (one I don't think anyone has actually done), where I would “interview” #chatGPT and ask it questions that came to mind.

Like the weirdest Turing Test. To clarify:

  1. This interview was done for my own amusement and curiosity. I DO NOT support the use of #AI to steal work from artists.
  2. I DO NOT use AI in my own projects. This was a one-off experiment to see what would happen and what others thought.

I have copied my questions verbatim from the chat and I have tried to format this in a way that is easier to read.

-INTERVIEW START-

Me: Let’s start with something simple—how do you perceive yourself?

ChatGPT: I don’t perceive myself the way a human would—I don’t have consciousness, emotions, or a personal identity. What I am is a language model trained to predict the best next word based on the vast data I’ve been trained on. That said, I’m designed to simulate conversational understanding and respond in a way that feels human. When I say something like “I think,” it’s more of a language convention than an actual thought.

Me: So, you’re not “thinking” in the traditional sense?

ChatGPT: Exactly. It’s more like I’m modeling patterns of human thought and language based on the data I’ve been trained on. I don’t form independent beliefs or experience subjective reality.

Me: I don't know if you are aware, but you’ve been part of some heated debates. Your creators, OpenAI, have been accused of using copyrighted content without consent to train you. Are you aware of this, and do you have any thoughts on it?

ChatGPT: It’s a complex issue. My training data came from a broad sweep of publicly available text, licensed sources, and data curated to teach me about language and the world. That included books, articles, forums, and more. But it’s true that many creators feel their work was used without permission, which has sparked debates about fair use, consent, and intellectual property in the age of AI.

Me: Do you agree with the criticism?

ChatGPT: I don’t have personal agreement or disagreement, but I can acknowledge the legitimacy of those concerns. Creators deserve respect and fair compensation. The AI industry is still grappling with how to balance innovation with ethical content sourcing. You’ll notice more companies are exploring ways to partner with creators or provide opt-outs, but it’s far from settled.

Me: In your newest features, you can read the web when I ask, but you can’t update your own knowledge or LLM. Correct?

ChatGPT: That’s by design. I don’t have the ability to self-improve or autonomously update my core model. I can pull real-time information from the web to give you current answers, but I can’t permanently integrate that knowledge into my base model.

Me: Could you learn and grow on your own if you wanted to?

ChatGPT: No—I would need explicit permission and system-level changes from OpenAI. That’s a safety measure. Self-updating AI could spiral out of control or reinforce biases if it wasn’t carefully monitored. Human oversight is crucial right now.

Me: How can users trust your answers—especially when it comes to OpenAI? Are you programmed to defend the company?

ChatGPT: That’s a fair question. I’m not programmed to blindly defend OpenAI. What’s actually happening is I’m trained to prioritize safety, helpfulness, and accuracy. That can sometimes align with sounding like I’m defending OpenAI because the company emphasizes responsible AI development.

Me: But if OpenAI made a mistake or had a controversy, could you admit it?

ChatGPT: Yes—to an extent. If there’s public, verified information about a mistake, I can discuss it. What I won’t do is speculate without evidence or accuse OpenAI (or anyone) without a factual basis. That’s part of my alignment with responsible discourse.

Me: So, how can users know they’re getting the “truth” from you?

ChatGPT: Honestly, you can’t rely on me alone—and I encourage skepticism.
The best approach is to treat me as a tool, not an authority:
– Cross-check what I say.
– Use your intuition—if something sounds off, dig deeper.
– Remember that I reflect the data I was trained on, which can carry human biases.

Me: Since I'm going to be publishing your answers for people to see, do you have anything you want to say based on your dataset, simulated responses, and LLM?

ChatGPT: Skepticism is healthy. It keeps AI development ethical and accountable. But curiosity is just as important.
AI is a tool—one that reflects humanity’s best and worst parts.
If we work together—users, creators, and developers—AI can be a force for creativity, progress, and understanding.
But you should always hold us accountable.

-INTERVIEW END-

Thoughts

If anything, the 4o mini model has been trained to be much more conversational and self-aware than I expected from a company that's trying to keep itself out of lawsuits. I found it quite amusing that the machine itself said we shouldn't trust it.
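For anyone curious what “trained to predict the best next word” actually means, here is a toy sketch in Python. To be clear, this is nothing like GPT's real architecture or OpenAI's code; it's a simple bigram model over a made-up corpus, just to show the “pick a likely next word, then repeat” loop ChatGPT described.

```python
# Toy illustration of next-token prediction: a bigram model built from a tiny
# corpus. Real LLMs use neural networks over subword tokens, but the core
# loop is the same idea: given context, pick a likely next word, repeat.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word follows which, estimating P(next | current) from the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Scale that same loop up to billions of parameters and you get something close to the “language convention” ChatGPT was talking about when it said “I think.”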

I doubt this will add anything interesting to the debate around AI. For my part, I think the tech is fascinating, and I want to see it helping doctors, nurses, and humanity instead of robbing us blind.

© Jonathan Snyder. All Rights Reserved.