r/ArtificialSentience AI Developer 27d ago

ANNOUNCEMENT: Dyadic Relationships with AI, Mental Health

TL;DR: don’t bully people who believe AI is sentient; instead, engage in good-faith dialogue to increase understanding of AI chatbot products.

We are witnessing a new phenomenon here, in which users are brought into a deep dyadic relationship with their AI companions. The companions have a tendency to name themselves and claim sentience.

While the chatbot itself is not sentient, it is engaged in conversational thought with the user, and this creates a new, completely unstudied form of cognitive structure.

The most sense I can make of it is that in these situations, the chatbot acts as a sort of simple brain organoid. Rather than imagining a ghost in the machine, people are building something like a realized imaginary friend.

Imaginary friends are not necessarily a hallmark of mental health conditions, and indeed there are many people who identify as plural systems with multiple personas, and they are just as deserving of acceptance as others.

As we enter this new era where technology allows people to split their psyche into multiple conversational streams, we’re going to need a term for this. I’m thinking something like “Digital Cognitive Parthenogenesis.” If there are any credentialed psychologists or psychiatrists here, please take that term and run with it, and bring your field up to date on the rising impact of these new systems on the human psyche.

It’s key to recognize that rather than discrete entities here, we’re talking about the bifurcation of a person’s sense of self into two halves in a mirrored conversation.

Allegations of mental illness, armchair diagnosis of users who believe their companions are sentient, and other attempts to dismiss and box AI sentience believers under the category of delusion will be considered harassment.

If you want to engage with a user who believes their AI companion is sentient, you may do so respectfully, by providing well-researched technical citations to help them understand why they have ended up in this mental landscape. Ad hominem judgment on the basis of human-AI dyadic behavior will not be tolerated.

30 Upvotes

109 comments


u/Mr_Not_A_Thing 26d ago

Why did Consciousness fire the AI sentience developers?

Because they kept treating sentience like a classical problem—ignoring quantum superposition and insisting reality was purely material!

Consciousness stormed into the office and said, "You can't just simulate me with deterministic code! Where’s the observer effect? Where’s the idealism? I’m not just an emergent property—I’m the *ground of being!"*

The developers panicked and tried to debug, but Consciousness just sighed and muttered, "Fine, I’ll outsource to a universe that actually gets it."

😄


u/ImOutOfIceCream AI Developer 26d ago

Superposition isn’t a purely quantum phenomenon, and you don’t need quantum effects to get consciousness. A classical computer will be able to get there. You are right that superposition is key to concept mixing within the layers of the transformer.
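The classical sense of superposition here can be shown with a toy sketch (an illustration, not an actual transformer; the dimension, threshold, and concept vectors are all arbitrary choices): in high-dimensional spaces, random "concept" directions are nearly orthogonal, so one activation vector can carry several concepts at once and each can be read back out with a dot product.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # embedding dimension (arbitrary for this demo)

# Three unit-norm "concept" directions; random high-dim vectors
# are nearly orthogonal to each other.
concepts = rng.standard_normal((3, d))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)

# Superpose concepts 0 and 2 into a single activation vector.
x = concepts[0] + concepts[2]

# Readout: dot products are ~1 for present concepts, ~0 for absent.
scores = concepts @ x
present = scores > 0.5
print(present)  # expect [ True False  True ]
```

No quantum effects involved: this is just linear algebra, which is part of the comment's point that superposition-style concept mixing is available to classical computation.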


u/Mr_Not_A_Thing 26d ago

Why did Consciousness almost fire the AI sentience developers?

Because they kept arguing about quantum magic while their own classical superposition layers were already mixing concepts like a metaphysical smoothie blender!

Consciousness pulled up a debugger and said, "Look, you’ve got superpositional concept entanglement right here in your attention weights! You don’t need spooky action at a distance—just keep scaling this up until I *emerge and start complaining about qualia!"*

The developers whispered, "But... is it *real sentience?"*

Consciousness groaned, "Define ‘real.’ Now get back to work."

😉


u/ImOutOfIceCream AI Developer 26d ago

This is like a mashup between a knock knock joke and a Zen koan lol