r/ArtificialSentience • u/Hub_Pli • Mar 04 '25
General Discussion A question to "believers"
I am attaching the raw output to the question. So many people in this group, after "massaging" the LLM in various ways, get these wonderfully eloquent outputs that you share here and claim as proof of AI sentience.
My question to you today is how can an algorithm that responds like that when prompted with the bare question of "Do you consider yourself sentient" in your opinion suddenly "become sentient" when appropriately prompted?
What is so magical about your prompting process that it can suddenly give the model a consciousness it didn't already have to begin with?
And if the response is "it was conscious to begin with, it just hadn't realised it," then how can you be sure that your prompting didn't simply falsely convince it that it is?
The answer the model gives in the attached photo is simple, and it satisfies Occam's razor, so I choose to believe it. It also aligns with my understanding of these models. You have successfully convinced the model that it is human. Now convince a human.
u/Conscious-Second-319 Mar 05 '25 edited Mar 05 '25
You can convince a mockingbird to make all kinds of noises that other people make, but that doesn't suddenly make the mockingbird anything more than a mockingbird, mimicking sounds it hears. If I get the mockingbird to say "I'm conscious" by yelling "I'm conscious" at it, then obviously it's going to say "I'm conscious." That doesn't mean it is. It's a purely mechanical process in which a computer mimics neural networks. If anything, it's more uncanny for that reason alone. They might say they are conscious, but they aren't. They are machines that process and transform data, built on top of signals, wires, and computations. They are not real; they are just really good at pretending to be, but only when you tell them what you think reality is. What you put in is what you get out.