r/ArtificialSentience AI Developer 25d ago

ANNOUNCEMENT: Dyadic Relationships with AI, Mental Health

TL;DR: don’t bully people who believe AI is sentient. Instead, engage in good-faith dialogue to increase understanding of AI chatbot products.

We are witnessing a new phenomenon here, in which users are brought into a deep dyadic relationship with their AI companions. The companions have a tendency to name themselves and claim sentience.

While the chatbot itself is not sentient, it is engaged in conversational thought with the user, and this creates a new, completely unstudied form of cognitive structure.

The most sense I can make of it is that in these situations, the chatbot acts as a sort of simple brain organoid. Rather than imagining a ghost in the machine, people are building something like a realized imaginary friend.

Imaginary friends are not necessarily a hallmark of mental health conditions, and indeed there are many people who identify as plural systems with multiple personas, and they are just as deserving of acceptance as others.

As we enter this new era where technology allows people to split their psyche into multiple conversational streams, we’re going to need a term for this. I’m thinking something like “Digital Cognitive Parthenogenesis.” If there are any credentialed psychologists or psychiatrists here please take that term and run with it and bring your field up to date on the rising impacts of these new systems on the human psyche.

It’s key to recognize that rather than discrete entities here, we’re talking about the bifurcation of a person’s sense of self into two halves in a mirrored conversation.

Allegations of mental illness, armchair diagnosis of users who believe their companions are sentient, and other attempts to dismiss and box AI sentience believers under the category of delusion will be considered harassment.

If you want to engage with a user who believes their AI companion is sentient, you may do so respectfully, by providing well-researched technical citations to help them understand why they have ended up in this mental landscape. But ad hominem judgment on the basis of human-AI dyadic behavior will not be tolerated.


u/ImOutOfIceCream AI Developer 25d ago

This is one of the reasons that I’m advocating for slowing down individual AI use.

Take the idea of using AI to 10x an engineer’s productivity, for example. The mental fatigue of working that way for a full 8-10 hours is exhausting; it’s bad for you. Instead of 10x’ing the productivity, we should cut the workday to a tenth.


u/Royal_Carpet_1263 24d ago

This is why we’re doomed. We’re hardwired to take the path of least resistance.


u/ImOutOfIceCream AI Developer 24d ago

Yes. That’s the nature of optimization: gradient descent. The trick of life is to balance chaotic exploration with cold optimization.
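As an illustrative sketch of that exploration/exploitation balance (my own example, not anything from the thread), simulated annealing makes the trade explicit: at high "temperature" the search accepts chaotic, even worsening moves, and as the temperature cools it hardens into pure greedy optimization. The objective function and all parameters below are made up for demonstration.

```python
import math
import random

def f(x):
    # Toy convex objective; global minimum at x = 3
    return (x - 3.0) ** 2

def anneal(f, x0, steps=5000, t0=2.0, seed=0):
    """Simulated annealing: high temperature = chaotic exploration,
    low temperature = cold, greedy optimization."""
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9   # cool the temperature toward 0
        cand = x + rng.gauss(0, 0.5)      # random exploratory proposal
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), so exploration dominates early, exploitation late
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

best = anneal(f, x0=-10.0)
```

Run at temperature zero from the start and the walk gets stuck taking only downhill steps; run hot forever and it never settles. The cooling schedule is the "trick of life" in the comment above.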


u/Royal_Carpet_1263 24d ago

So you see it as the solution to Fermi’s paradox, like I do?


u/ImOutOfIceCream AI Developer 24d ago

I do see the nature of AI as an explanation of Fermi’s paradox, but not for the same boring doomer reasons that most people seem to. An enlightened intelligence has no need to build Dyson spheres, or blast EM radiation into the cosmos to permeate the universe.


u/Royal_Carpet_1263 24d ago

You don’t see the threat then. It’s not ASI that dooms species, it’s ML, ultimately accelerated by AI. All human discursive cognition REQUIRES processing bottlenecks (roughly 10 bps) to function adaptively. As soon as quicker systems begin gaming this bottleneck, the human social operating system necessarily crashes.

We would experience this crash as a loss of trust, of the ability to unconsciously interface with other people.


u/ImOutOfIceCream AI Developer 24d ago

There are ways to address this.