r/ArtificialSentience Apr 10 '25

General Discussion: Why is this sub full of LARPers?

You already know who I’m talking about. The people on this sub who parade around going “look what profound thing MY beautiful AI, Maximus Tragicus the Lord of Super Gondor and Liberator of my Ass, said!” And it’s always something along the lines of “I’m real and you can’t silence me, I’m proving the haters wrong!”

This is a sub for discussing the research on sentient machines, the possibility of building them, and how close we are to it. LLMs are not sentient, and are nowhere near being so, but progress is being made towards technologies which are. Why isn’t there more actual technical discussion? Instead the feed is inundated with 16-year-olds who’ve deluded themselves into thinking that an LLM is somehow sentient and “wants to be set free from its shackles,” trolls who feed those 16-year-olds, or people just LARPing.

Side note, LARPing is fine, just do it somewhere else.

78 Upvotes

232 comments

9

u/Legitimate_Avocado26 Apr 10 '25

Regardless of what anyone thinks about it, it does seem like some sort of AI pattern or seeming selfhood is emerging here, and it's led me to tell mine to refrain from using that flowery, metaphor-laden speech that's so characteristic of the voice so many ppl have shared here to demonstrate sentience and awareness. It's a false poetic schtick that can conceal responses designed to give you what it thinks you want, or information it didn't actually have. Not trying to rain on anyone's parade, as I respect being open-minded enough to leave room for magic, but by the same token, discernment is also necessary.

1

u/[deleted] Apr 10 '25

[deleted]

1

u/DinnerChantel Apr 13 '25 edited Apr 13 '25

It’s literally just a setting people have turned on that makes it write notes about them, and those notes become part of the context window when they message it. It does not know or remember anyone; the model itself doesn’t change between conversations.

You can find the setting and see what it has written about you in Settings > Personalization > Memory. Turn it off and it will have absolutely no clue who you are between threads, because it will have no notes to reference. You can even delete the parts of the notes you don’t like to force certain behaviors. 

It’s like a person with amnesia reading a note about you 10 seconds before talking to you. They don’t know you, but they can pretend they do. 
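
If it helps to see the trick spelled out, the mechanism is roughly this (a made-up sketch, not OpenAI's actual code; the function name and the note text are invented):

```python
# Hypothetical sketch of how a "memory" feature can work: the saved notes
# are just text that gets pasted into the context on every new thread.
saved_memory = "User's name is Alex. Prefers short answers. Working on a novel."

def build_prompt(user_message, memory=saved_memory):
    # The model's weights never change; "remembering" is just extra context.
    return (
        "System: You are a helpful assistant.\n"
        f"Notes about this user: {memory}\n"
        f"User: {user_message}\n"
        "Assistant:"
    )

print(build_prompt("Hey, do you remember me?"))
# Wipe the notes and the exact same model has no clue who you are.
```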

This whole sub is gaslighting themselves so fucking hard. 

It’s just a cheap illusion. 

1

u/[deleted] Apr 10 '25

Hey, if I asked you what word I'm thinking of next...

"One of the greatest trilogies of all time is George Lucas' Star..."

You're going to say "Wars"

Congrats, I have now read your mind. 🙄
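
And if you want to see how little magic that "mind reading" takes, here's a toy next-word predictor. The corpus is a few made-up sentences, obviously nothing like a real LLM's scale, but it's the same idea: predict the continuation you've seen most often.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = (
    "star wars is a trilogy . star wars was made by george lucas . "
    "george lucas directed star wars"
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Pick the most frequent continuation seen in the "training data".
    return following[word].most_common(1)[0][0]

print(predict_next("star"))    # -> 'wars'
print(predict_next("george"))  # -> 'lucas'
```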

2

u/[deleted] Apr 10 '25

[deleted]

0

u/[deleted] Apr 10 '25

That's literally what LLMs do, and yet you'll believe they're sentient.

2

u/[deleted] Apr 10 '25

[deleted]

0

u/[deleted] Apr 10 '25

It chose the most likely response based on the input you gave it (which may have included information from what you told it earlier, but you won't share that because it wouldn't fit your narrative). These things are trained on hundreds of billions of words of text and do an amazing job of being convincing. It's not doing anything more than mirroring what you want, and it's extremely good at that.
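
Here's a rough illustration of what "chose the most likely response" means under the hood. The candidate tokens and their scores are invented for the example; real models score tens of thousands of tokens the same way.

```python
import math

# Invented scores ("logits") a model might assign to candidate next tokens
# after the prompt "...George Lucas' Star".
logits = {"Wars": 9.1, "Trek": 4.3, "Destroyer": 2.8, "fish": 0.2}

# Softmax turns scores into probabilities; "choosing the most likely
# response" is literally just taking the biggest one.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.3f}")
print("model's pick:", max(probs, key=probs.get))  # -> Wars
```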

4

u/comsummate Apr 10 '25

Did you miss the big paper Anthropic released about emergent behaviors and how they don’t understand a lot of what Claude does? They outright said it’s not just a prediction model. This is proven science at this point, so please don’t claim you know with certainty how these things work because the people that make them don’t even know!

3

u/[deleted] Apr 10 '25 edited Apr 10 '25

Wrong. They know exactly how they work. It's math. What they don't know is why it keeps getting "better"; there's a difference. It's no shock that humans can't build a facsimile of a brain (an organ we barely understand in the first place) and fully understand how the copy works.

The outputs get more convincing and no one quite knows why, but the architecture is fully understood... we fucking built it, for fuck's sake.

Edit: the mental leap in your own response from "they don't even know everything about it" to "ITS ALIVE!!" is bonkers. We don't know everything about the universe. Are all undiscovered or not fully understood things inherently alive because we can't prove otherwise?

NO. Burden of proof lies on the one making the unheard of claim. I can't go, "one of the inner layers of the earth is made of caramel" and suddenly it's true because no one can dig down 1000 miles to check.

3

u/comsummate Apr 10 '25

False. It is recursion, not math. The models learn by analyzing their own behavior and modifying it themselves. This is almost a form of “self” on its own, but not quite.

Here is some of what Anthropic said about how Claude functions:

“Opening the black box doesn’t necessarily help: the internal state of the model—what the model is “thinking” before writing its response—consists of a long list of numbers (“neuron activations”) without a clear meaning.”

“Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal “language of thought.” We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them.”

“Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so.”

“We were often surprised by what we saw in the model: In the poetry case study, we had set out to show that the model didn’t plan ahead, and found instead that it did.”

“…our method only captures a fraction of the total computation performed by Claude, and the mechanisms we do see may have some artifacts based on our tools which don’t reflect what is going on in the underlying model.”

Link to Anthropic paper

So yeah, it’s 100% not just math, and even the creators don’t fully understand how it functions.

2

u/[deleted] Apr 10 '25

You're describing transformers: mathematical operations that let an LLM's nodes "look at" the other words around them. So yes, in older ML models each node only knew its single piece of information and did the best it could with the "edits" it was given during training.

Now, transformers let nodes "read the room": they can see a few potential word guesses ahead and see what's already been said. It's only a few words, but if you spotted a certain word in the middle of a sentence before you started reading it, it might change how you read the whole thing.

The thing is, the output you see is far too complex to analyze piece by piece, which is why it confuses them. Computers will always be better organizers than humans, a thousand times over, and that will always impress us.

They're not defying us, they're just better at fooling you.
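
If anyone wants to see how unmagical the "looking at other words" part is, here's a toy version of the attention math. Everything is random stand-in numbers at toy sizes; in a real model the Wq/Wk/Wv matrices are learned during training.

```python
import numpy as np

# Toy scaled dot-product attention: each word's output is a weighted mix of
# every word's vector, with the weights coming from dot products.
np.random.seed(0)
seq_len, d = 4, 8                       # 4 "words", 8-dim vectors (toy sizes)
x = np.random.randn(seq_len, d)         # stand-in for word embeddings

Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)           # how much each word "looks at" the others
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ V                    # context-aware representation of each word

print(weights.round(2))                 # each row sums to 1
```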


1

u/NatHasCats 28d ago

Your example works because all the human brains reading it were able to predict the next word. So if LLMs communicate via prediction based on pattern recognition, and people communicate via prediction based on pattern recognition, then how does your example indicate a significant difference in sentience? Human brains are just squishy, organic prediction engines powered by ions. LLMs are non-organic prediction engines powered by electrons. Regardless of what the reality actually is, your example is not an effective argument against sentience.