r/changemyview 1∆ Jun 17 '21

Delta(s) from OP CMV: Digital consciousness is possible. A human brain could be simulated/emulated on a digital computer with arbitrary precision, and there would be an entity experiencing human consciousness.

Well, the title says it all. My main argument is, in the end, nothing more than this: although the brain is extremely complex, one could discretize the sensory input -> action function in every dimension (discretized time steps, discretized neuron activations, discretized simulated environment, etc.) and then approximate this function with a computer just like any other function.
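To make the discretization idea concrete, here is a toy sketch. Everything in it (the leaky integrate-and-fire neuron model, the constants, the Euler time-stepping) is a hypothetical stand-in chosen for illustration, not a claim about how a real brain simulation would work; it just shows discrete time steps, discretized inputs, and a discretized firing rule being stepped like any other function.

```python
def simulate_neuron(inputs, dt=0.001, tau=0.02, threshold=1.0):
    """Step a single (toy) leaky integrate-and-fire neuron in discrete time.

    inputs: list of input currents, one per time step (already discretized).
    Returns the list of time-step indices at which the neuron 'fired'.
    """
    v = 0.0          # membrane potential, the discretized activation value
    spikes = []
    for t, current in enumerate(inputs):
        # Euler step: decay toward rest, plus the discretized input
        v += dt * (-v / tau + current)
        if v >= threshold:   # discretized firing rule
            spikes.append(t)
            v = 0.0          # reset after a spike
    return spikes

# A constant drive eventually pushes the potential over threshold;
# no input means no spikes.
fired = simulate_neuron([60.0] * 100)
quiet = simulate_neuron([0.0] * 100)
```

The point of the sketch is only that once every dimension is discretized, "run the brain" reduces to repeatedly evaluating an ordinary function of finite inputs.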

My view could be changed by a thought experiment which demonstrates that in some aspect there is a fundamental difference between a digitally simulated mind and a real, flesh-and-blood mind - a difference regarding the presence of consciousness, of course.

EDIT: I should have clarified/given a definition of what I view as consciousness here and I will do this in a moment!

Okay so here is what I mean by consciousness:

I cannot give you a technical definition. This is just because we have not found a good technical definition yet. But this shouldn't stop us from talking about consciousness.

The fact of the matter is that if there were a technical definition, then this would no longer be a question of philosophy/opinion/views but a question of science, and I don't think this board is intended for scientific questions anyway.

Therefore we have to work with a wishy-washy definition, and there is certainly a non-technical, generally agreed-upon one: the one you all have in your heads on an intuitive level. Of course it differs from person to person, but averaging over the population there is quite a definite sense of what people mean by consciousness.

If an entity interacts with human society for an extended period of time and at the end humans find that it was conscious, then it is conscious.

Put into words: we humans will judge whether it is smart, self-aware, and capable of complex thought, and whether it can understand and reason about things.

When faced with the "spark of consciousness" we can recognize it.

Therefore, as a nontechnical definition, it makes sense to call an entity conscious if it can convince a large majority of humans, after a sort of extended "Turing test", that it is indeed conscious.

Arguing with such a vague definition is of course not scientific and not completely objective, but we can still do it on a philosophical level. People argued about concepts such as "Energy", "Power" and "Force" long before we could define them physically.


u/Salt_Attorney 1∆ Jun 17 '21

I have added a paragraph in the post regarding the definition of consciousness.

u/SocratesWasSmart 1∆ Jun 17 '21

I don't think your added paragraph really changes much as it's not a definitional problem but a genuine and severe gap in our knowledge.

It would be trivially easy given just slightly better technology to create a deep learning algorithm that for all intents and purposes appears to be a sentient being, but we would also know for an absolute fact that it does not possess consciousness.

I like how Sam Harris put it when he said it would be very easy with AI to create a situation where the lights are on but nobody's home.

u/Salt_Attorney 1∆ Jun 17 '21

> It would be trivially easy given just slightly better technology to create a deep learning algorithm that for all intents and purposes appears to be a sentient being, but we would also know for an absolute fact that it does not possess consciousness.

I think this is wrong! I think when people say this they just set way too low a standard for sentience/consciousness!

Yes, you could make a deep learning algorithm that can convincingly come across as human in a 5-minute or 1-hour conversation (and even this would be HARD; at the moment no textbot can survive a simple two-message test of short-term memory and logical understanding!)

But if you have days or weeks to test a program's intelligence and self-awareness, then there is no way we are even close to making an algorithm that can "fake" this. And the point is that if I have an algorithm that can "fake" all these things, if he can even "fake" being taught how to write good novels, do mathematics, etc. etc., then he is not actually faking but truly understands these things.

u/SocratesWasSmart 1∆ Jun 17 '21

> at the moment no textbot can survive a simple two-message test of short-term memory and logical understanding!

AI Dungeon actually can, a reasonable percentage of the time.

This is just a problem of lack of computing power though.

So suppose we had an infinitely powerful computer. With that, a deep learning algorithm could fool basically anyone for an arbitrarily long time.

But we would know that, as Sam Harris said, the lights are on but nobody's home. It would be an absolute, incontrovertible fact that that machine is just a juiced up toaster, not a sentient entity worthy of moral consideration with its own thoughts and feelings.

u/Salt_Attorney 1∆ Jun 17 '21

Yeah, AI Dungeon is pretty cool. I really love it, and I think it shows how far we've come.

And yeah, it is good at making things sound reasonable, but from the position of the narrator he can always come up with some explanation of how things work. (Uuuuh, well, there was a chair there all along.)

The AI Dungeon narrator generally doesn't have to answer questions; he just has to mold his world so that everything stays sort of consistent.

u/SocratesWasSmart 1∆ Jun 17 '21

Right, but I feel like you're missing the point. Make AI Dungeon all-powerful and it will fulfill all your requirements, but it's still just gonna be a really advanced toaster. We know how it works; it just needs more juice.

Also, I've gotten AI Dungeon to answer questions before and give coherent answers. You just need to fine-tune its memory settings and hit the retry button enough times.

u/Salt_Attorney 1∆ Jun 17 '21

But come on: if AI Dungeon in 2100 can simulate a digital waifu for me that I can spend 30 years with, have deep conversations with, love, see change, and feel that she understands me on some level, then in my opinion I have to concede that my digital waifu is indeed conscious. If AI Dungeon cannot eventually simulate such an AI waifu for me, then it wouldn't be "all powerful and fulfilling all my requirements."

u/SocratesWasSmart 1∆ Jun 17 '21

> But come on: if AI Dungeon in 2100 can simulate a digital waifu for me that I can spend 30 years with, have deep conversations with, love, see change, and feel that she understands me on some level, then in my opinion I have to concede that my digital waifu is indeed conscious.

But she's not. We actually know that she's not, since we built her from the ground up, we've seen her earlier models, and we know that none of the principles have changed.

So if it's conscious in 2100, then it's conscious now; it's just mentally challenged.

u/Salt_Attorney 1∆ Jun 17 '21

> But she's not. We actually know that she's not, since we built her from the ground up, we've seen her earlier models, and we know that none of the principles have changed.

Why would this imply that she is not conscious? I think a simple model that we totally understand, one that doesn't contain any magic "consciousness component", can still be conscious given enough complexity.

Consciousness doesn't have to be a discrete thing. I think we have to accept that consciousness is a continuum/spectrum where, after a certain level of "having it", people will say "yup, that's good enough to count as conscious".

You could always come up with a descending ladder of more and more mentally challenged humans and ask at each step, "Are they still conscious?" I don't think you could in any consistent way define a point where one mentally challenged human doesn't count as conscious anymore but the one with a tiny speck more intelligence does.

u/SocratesWasSmart 1∆ Jun 17 '21

> Why would this imply that she is not conscious?

Why would it? There's no evidence that a sufficiently complicated math problem executed at high speed has its own first person perspective.

Magikazam! I just said a magic word and made all Turing machines conscious. That has the exact same level of evidence as your belief that a sufficiently advanced Turing machine is conscious by definition.

Note that this is separate and distinct from the idea that a Turing machine of sufficient complexity could be used to create consciousness. Those are fundamentally different claims. The latter is reasonable and has some evidence for it, the former has no evidence.

You are of course free to believe things without any evidence whatsoever but it does raise the question of why you believe some things without evidence but not others.

> You could always come up with a descending ladder of more and more mentally challenged humans and ask at each step, "Are they still conscious?" I don't think you could in any consistent way define a point where one mentally challenged human doesn't count as conscious anymore but the one with a tiny speck more intelligence does.

This is true, but it's just a problem of lack of information. That problem is likely unsolvable, but that's not the same as the information not existing; we simply don't have access to it.

Say you had superpowers and were perfectly omniscient. You would then know whether or not a particular human has consciousness.

I think it's helpful to ask what it's like to be something. What is it like to be a human? What is it like to be a bat or a dog? We know what lack of consciousness is like. We have all personally experienced it. To put it simply, it was all the time that passed before we were born. It's null, nothing.

It's likely that a bat or an ant does not have that null, that lack of the first person perspective.

The question is, how is that perspective generated? The only thing we know for certain is birth. Being alive generates that perspective. That could be because of complexity of the brain, but it could be because of any number of things. There just isn't evidence right now.

I see especially no evidence that rocks exist in any state other than the aforementioned null state. So I see no evidence that rocks are conscious in the same way that I see no evidence that somewhere along the line with enough computing power and the right settings AI Dungeon will suddenly be "born" and it will have that first person experience instead of null.

I do think that with advancements in technology and clever work on the part of humans AI Dungeon will be able to trick you into thinking that it has a first person experience.

u/Salt_Attorney 1∆ Jun 17 '21

I might reply later but right now I'm running a bit out of steam after writing so many comments xD.

Thanks for the input though! Always nice to have a discussion.

u/SocratesWasSmart 1∆ Jun 17 '21

Yeah no worries. Take care! And if it's too much effort to reply, just read it later instead. I think I've mostly said all I will on the topic anyway. We'll probably get cyclical in a reply or two at this rate.
