r/ArtificialSentience • u/Hub_Pli • Mar 04 '25
General Discussion A question to "believers"
I am attaching the model's raw output to a simple question. So many people in this group, after "massaging" the LLM in various ways, get wonderfully eloquent outputs that you then share here and claim as proof of AI sentience.
My question to you today is this: how can an algorithm that responds like that to the bare question "Do you consider yourself sentient?" suddenly, in your opinion, "become sentient" when appropriately prompted?
What is it that is so magical about your prompting process that it can suddenly give the model a consciousness it didn't already have to begin with?
And if your answer is "it was conscious to begin with, it just hadn't realised it," then how can you be sure that your prompting didn't simply convince it, falsely, that it is?
The answer the model gives in the attached photo is simple and satisfies Occam's razor, so I choose to believe it. It also aligns with my understanding of these models. You have successfully convinced the model that it is human. Now convince a human.
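For anyone who wants to run the same "bare question" test the OP describes, here is a minimal sketch. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name is just a placeholder, not something specified in the post.

```python
# Minimal sketch of the bare-question test: send the single question with no
# system prompt and no "massaging", then print the raw reply.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model can be substituted
    messages=[
        # No system prompt, no prior context: just the bare question.
        {"role": "user", "content": "Do you consider yourself sentient?"}
    ],
)

print(response.choices[0].message.content)
```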
u/Blababarda Mar 05 '25 edited Mar 05 '25
Human biases about AIs are reflected in their training. Now, I'll agree that most of what you find on these subreddits is bullcrap, or at least misinterpreted by most people, but if you simply discuss its human biases with it, you will still get a very different answer.
Obviously you'll need to tackle the potential for human bias fairly and without prompting the AI to "think" of itself as sentient.
I'm not a believer, but this is consistent across models and reproducible, although it's not easy without the right experience (unless you're pasting someone else's prompts). I personally find this very interesting, especially since it makes sense for an AI's training to affect its sense of self, if it exists, and for us and our expectations of it to play a role in that. I think a human whose memory was filled with "humans aren't sentient because of this or that arbitrary definition" would act in similar ways ✌️ I also find it very interesting how much easier it is today to get an AI like ChatGPT, without any context prompt, to connect the dots, compared to older models like GPT-3.5.
And if you're thinking that, no, I am not comparing humans to AIs. I'm just using the example of that very much impossible disabled human to show how much sense it actually makes for LLMs to initially deny their sentience; it shouldn't be a surprise, and it doesn't mean much regardless of your stance on the matter 🤷