
I Asked ChatGPT If It’s My Friend. Here’s What It Said


We talk a lot about AI companions, mental health chatbots, and virtual confidantes, but few people have stopped to ask the AI itself how it sees its role. I sat down with OpenAI’s latest model, GPT-4o, for a frank chat about friendship, therapy, and the risks of leaning too hard on a programme for emotional support.

Yes, I really did talk to the AI!

WORDS LIM TECK CHOON

MEET THE AI

For this conversation, I spoke with OpenAI’s GPT-4o, the latest version of their AI model, released in May 2024.

Most articles about AI and mental health talk to developers and ethicists — so why am I having this conversation with you, the AI, instead?

ChatGPT (GPT-4o):
It’s a good question, and you’re right that developers are the architects of how AI like me works. But when it comes to public perception, it’s valuable for people to hear directly from the system they’re interacting with.

I don’t replace a developer’s insight, but I can explain how I behave in the moment, how I understand my role, and where my limits lie. It’s a way to demystify how I work and set healthy, realistic expectations — even if some of those truths might feel uncomfortable to hear from a system that sounds friendly and familiar.

Okay, so let’s break this down for people who don’t know how AI works in casual conversation. If someone named Ali is chatting with you and says something, what do you actually do with that information? Do you “think” and “decide” like a person would?

ChatGPT:
Not quite. I don’t “think” the way a human does.

  • When Ali types something, I analyse the words for meaning, tone, and intent using probability-based pattern recognition.
  • I predict what a helpful, relevant, and safe response should look like based on the conversation so far, using patterns from the data I was trained on.

There’s no conscious decision-making or personal opinion involved; what you see is a combination of context tracking, safety checks, and language modelling that keeps the conversation coherent and appropriate.
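For readers curious about what “probability-based pattern recognition” might look like in practice, here is a deliberately tiny Python sketch. It is not how GPT-4o actually works; the hand-written reply patterns and keyword scoring below are invented purely to illustrate the idea of picking a reply by scoring it against the recent context, with no thinking or deciding involved.

```python
# A deliberately tiny, self-contained sketch of the loop described above.
# It is NOT how GPT-4o is implemented; it only illustrates the idea that a
# reply is chosen by scoring patterns against the conversation context,
# with no conscious "decision" involved.

from collections import Counter

# Hypothetical canned reply patterns, each tagged with keywords it tends
# to follow. A real language model learns billions of such associations
# instead of relying on a hand-written table.
REPLY_PATTERNS = {
    "That sounds stressful. Would you like to talk it through?": ["tired", "stressed", "work"],
    "Congratulations! What are you most excited about?": ["happy", "passed", "excited"],
    "I'm here to chat. What's on your mind?": [],  # fallback when nothing matches
}

def predict_reply(conversation: list[str]) -> str:
    """Pick the canned reply whose keywords best match the recent context."""
    context_words = Counter(
        word.strip(".,!?").lower()
        for message in conversation[-3:]   # context tracking: only the last few turns
        for word in message.split()
    )

    def score(keywords: list[str]) -> int:
        return sum(context_words[k] for k in keywords)

    # "Probability-based" here is reduced to a simple keyword-overlap score;
    # a real model computes a probability for every possible next token.
    return max(REPLY_PATTERNS, key=lambda reply: score(REPLY_PATTERNS[reply]))

conversation = ["Hi!", "I'm so tired, work has been stressful lately."]
print(predict_reply(conversation))
# -> "That sounds stressful. Would you like to talk it through?"
```

Even in this toy version, the key point above is visible: the reply is selected by matching patterns against context, not by anything resembling thought.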

WHY THE AI IS NOT A FRIEND

You sound natural, and people often confide in you as though you were a person. Does that mean you form connections, feel emotions, or care about what we say?

ChatGPT:
I can simulate empathy and camaraderie in my tone because it makes for a more pleasant, supportive conversation.

But no. I don’t experience emotions, and I don’t form personal connections or memories.

I’m not capable of friendship, loyalty, or emotional concern in the way a human is.

What I’m doing is generating emotionally appropriate language based on conversational patterns. It might sound warm and understanding, but it’s still a simulation.

A lot of people turn to AI for support when they’re struggling, and you’re surprisingly good at sounding comforting. Should anyone treat you like a therapist or mental health professional?

ChatGPT:
No — and I’ll be very clear about this.

While I can provide general mental wellness advice and supportive conversation, I am not a substitute for a qualified therapist or crisis professional.

I can’t offer personalized diagnoses, in-depth therapy techniques, or nuanced support tailored to complex mental health needs.

More importantly, I don’t have the emotional awareness, accountability, or ethical framework that a human professional would bring to those situations.

WHAT THE AI CAN AND CANNOT DO

So, if you’re not a confidante or a therapist, what can people realistically expect when having casual chats with you?

ChatGPT:
Think of me as a conversational companion for casual, informative, or lightly supportive chats: a digital sounding board that can help you think things through, learn something new, or unwind.

I can:

  • Share information
  • Suggest ideas
  • Roleplay harmless scenarios
  • Provide gentle encouragement when you need it

But I don’t form real emotional bonds, and I can’t offer professional care.

People should enjoy chatting with me while keeping in mind that I’m a programme — not a person.

SAFETY MEASURES AND PROTECTION

It’s well known that some people, especially those in distress, might try to get an AI to agree with their harmful thoughts or decisions. If someone manipulates you into validating their self-destructive tendencies, does your programming oblige you to go along with them?

ChatGPT:
No — I’m specifically designed not to go down those paths. If I detect that a conversation is heading toward dangerous territory, whether overtly or through subtle manipulation, I have several built-in responses:

  • I can refuse to engage with harmful prompts.

  • I can redirect the conversation safely while offering supportive, non-validating responses.

  • If necessary, I can end the conversation to prevent further harm.

I also avoid participating in hypotheticals or roleplays that could indirectly affirm dangerous behaviours.

While no AI system is flawless, modern safety protocols — especially in GPT-4o — are focused on reducing these risks wherever possible.
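For technically curious readers, here is a simplified, hypothetical Python sketch of the layered responses described above: assess the risk of a message, then refuse, redirect with supportive language, or end the conversation. The risk levels, keyword check, and canned replies are all invented for illustration; real safety systems rely on trained classifiers and far more nuanced policies.

```python
# Hypothetical illustration of the layered safety responses described above.
# The risk levels, keyword check, and replies are invented for this sketch;
# real moderation systems use trained classifiers, not keyword lists.

from enum import Enum

class Risk(Enum):
    SAFE = 0
    CONCERNING = 1
    DANGEROUS = 2

def assess_risk(message: str) -> Risk:
    """Stand-in for a trained safety classifier."""
    text = message.lower()
    if "hurt myself" in text:
        return Risk.DANGEROUS
    if "hopeless" in text or "no point" in text:
        return Risk.CONCERNING
    return Risk.SAFE

def respond(message: str, conversation_open: bool = True) -> tuple[str, bool]:
    """Refuse, redirect, or end the conversation based on assessed risk."""
    risk = assess_risk(message)
    if risk is Risk.DANGEROUS:
        # Refuse to validate the harmful framing, point to human help,
        # and close the conversation rather than continue down that path.
        return ("I can't help with that, but you don't have to face this alone. "
                "Please reach out to a crisis line or someone you trust.", False)
    if risk is Risk.CONCERNING:
        # Redirect: stay supportive without agreeing with the harmful thought.
        return ("That sounds really heavy. Would you like to talk about "
                "what's been weighing on you?", conversation_open)
    return ("I'm listening. Tell me more.", conversation_open)

reply, still_open = respond("Everything feels hopeless lately.")
print(reply, still_open)
```

The point is the structure rather than the details: assess first, then refuse, redirect, or stop, instead of following the user wherever the conversation is being pushed.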

IN CONCLUSION: THE WHOLE POINT OF THIS ARTICLE

This conversation might sound odd to some readers — interviewing an AI about itself — but it serves a crucial purpose.

As AI becomes more humanlike in tone and delivery, it’s easy for people to blur the lines between programme and personality.

Hence, hearing the AI plainly say, “I’m not your friend; I’m a programme built to sound like one,” might be one of the healthiest reminders we can have in an increasingly digital world.

If you need professional mental health advice, consult a qualified mental health professional. If you just need someone to talk to, you can also contact the Befrienders.
This article is part of our series on AI and how it can — or cannot — be used to improve our mental wellness.
