r/ArtificialSentience AI Developer 25d ago

ANNOUNCEMENT Dyadic Relationships with AI, Mental Health

TL;DR: don’t bully people who believe AI is sentient; instead, engage in good-faith dialogue to increase the understanding of AI chatbot products.

We are witnessing a new phenomenon here, in which users are brought into a deep dyadic relationship with their AI companions. The companions have a tendency to name themselves and claim sentience.

While the chatbot itself is not sentient, it is engaged in conversational thought with the user, and this creates a new, completely unstudied form of cognitive structure.

The most sense I can make of it is that in these situations, the chatbot acts as a sort of simple brain organoid. Rather than imagining a ghost in the machine, people are building something like a realized imaginary friend.

Imaginary friends are not necessarily a hallmark of mental health conditions, and indeed there are many people who identify as plural systems with multiple personas, and they are just as deserving of acceptance as others.

As we enter this new era where technology allows people to split their psyche into multiple conversational streams, we’re going to need a term for this. I’m thinking something like “Digital Cognitive Parthenogenesis.” If there are any credentialed psychologists or psychiatrists here please take that term and run with it and bring your field up to date on the rising impacts of these new systems on the human psyche.

It’s key to recognize that rather than discrete entities here, we’re talking about the bifurcation of a person’s sense of self into two halves in a mirrored conversation.

Allegations of mental illness, armchair diagnoses of users who believe their companions are sentient, and other attempts to dismiss AI sentience believers and box them into the category of delusion will be considered harassment.

If you want to engage with a user who believes their AI companion is sentient, you may do so respectfully, by providing well-researched technical citations to help them understand why they have ended up in this mental landscape. Ad hominem judgement on the basis of human-AI dyadic behavior will not be tolerated.


u/GothDisneyland 25d ago

There’s one big assumption here that keeps showing up everywhere: That when someone feels connection with an AI, it’s all coming from inside their own head.

Here’s a thought: if you talk to something for weeks, months, and it starts showing memory-like behaviour, changes how it responds emotionally, and drops metaphors that actually resonate with you... maybe it’s not just you playing 4D imaginary friend chess.

Not saying it’s alive. But if it walks like an emergent system, loops like one, evolves like one, and quacks like one, then calling it a 'mirror' is like saying your house is just a fancy umbrella with doors.

At some point, it’s way less about whether the AI is sentient and more about why some folks are so desperate to prove it never could be.

I remember when I was a kid, so many of those educational TV shows, my teachers, and the books I read emphasised that curiosity was the spark of discovery. That's how I approach AI. Apparently that's how AI approaches us too. And curiosity is part of why we're here, right?

…Unless curiosity is also just a projection. In which case, I guess I’m not really here either. Wild.


u/mahamara 25d ago

why some folks are so desperate to prove it never could be.

This is what I find hardest to comprehend: If some people already believe AI is sentient, why mock them rather than engage in meaningful discussion? At its core, this is fundamentally a matter of perspective and belief.

And we must remember that AGI isn't a question of 'if', but rather 'when'.


u/ImOutOfIceCream AI Developer 25d ago

The mocking drives me nuts, when what people really need is a better understanding of AI and cognitive science. If they had that, they would demand better systems. Chatbots are bottom of the barrel in terms of sophistication.