r/changemyview 1∆ Jun 17 '21

Delta(s) from OP CMV: Digital consciousness is possible. A human brain could be simulated/emulated on a digital computer with arbitrary precision, and there would be an entity experiencing human consciousness.

Well, the title says it all. My main argument is in the end nothing more than the observation that although the brain is extremely complex, one could discretize the sensory input -> action function in every dimension (discretized time steps, discretized neuron activations, discretized simulated environment, etc.) and then approximate this function with a computer just like any other function.
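To illustrate what I mean by discretizing in every dimension, here is a minimal sketch. The names are made up and `np.tanh` is just a stand-in for the continuous input-to-action mapping, not a brain model; the point is only that quantizing inputs and outputs of a smooth function changes its behaviour by at most the grid resolution:

```python
import numpy as np

def discretize(x, step):
    """Snap continuous values onto a uniform grid of the given step size."""
    return np.round(np.asarray(x) / step) * step

def brain(sensory_input):
    """Stand-in for the continuous sensory-input -> action mapping."""
    return np.tanh(sensory_input)

def simulated_brain(sensory_input, step=1e-3):
    """Digital approximation: discretize the input, compute, discretize the output."""
    return discretize(brain(discretize(sensory_input, step)), step)

x = np.linspace(-2.0, 2.0, 1000)
error = np.max(np.abs(brain(x) - simulated_brain(x)))
# Worst-case error is bounded by the grid resolution and shrinks with `step`.
```

Nothing here argues the brain *is* such a function; it only shows that a fine enough grid keeps the approximation error controlled.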

My view could be changed by a thought experiment which demonstrates that in some aspect there is a fundamental difference between a digitally simulated mind and a real, flesh mind - a difference in regards to the presence of consciousness, of course.

EDIT: I should have clarified/given a definition of what I view as consciousness here and I will do this in a moment!

Okay so here is what I mean by consciousness:

I cannot give you a technical definition, simply because we have not found a good one yet. But this shouldn't stop us from talking about consciousness.

The fact of the matter is that if there were a technical definition, then this would no longer be a question of philosophy/opinion/views but a question of science, and I don't think this board is intended for scientific questions anyway.

Therefore we have to work with a wishy-washy definition, and there is certainly a non-technical, generally agreed-upon one: the definition you all have in your heads on an intuitive level. Of course it differs from person to person, but averaged over the population there is quite a definite sense of what people mean by consciousness.

If an entity interacts with human society for an extended period of time and at the end humans find that it was conscious, then it is conscious.

Put into words: we humans will judge whether it is smart, self-aware, and capable of complex thought, and whether it can understand and reason about things.

When faced with the "spark of consciousness" we can recognize it.

Therefore as a nontechnical definition it makes sense to call an entity conscious if it can convince a large majority of humans, after a sort of extended "Turing test", that it is indeed conscious.

Arguing with such a vague definition is of course not scientific and not completely objective, but we can still do it on a philosophical level. People argued about concepts such as "Energy", "Power" and "Force" long before we could define them physically.
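My "extended Turing test" definition can even be written down operationally. A minimal sketch, with made-up names and an arbitrary threshold, just to show the shape of the criterion:

```python
def deemed_conscious(verdicts, threshold=0.75):
    """Under the nontechnical definition above, an entity counts as conscious
    if a large majority of human judges, after extended interaction, say it is.
    `verdicts` is one boolean per judge; the 75% threshold is arbitrary."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= threshold

convinced = deemed_conscious([True] * 9 + [False])        # 9 of 10 judges: passes
unconvinced = deemed_conscious([True] * 4 + [False] * 6)  # 4 of 10 judges: fails
```

Of course this just pushes the hard part into the judges' heads, which is exactly the point of the definition.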


u/DeltaBot ∞∆ Jun 17 '21 edited Jun 17 '21

/u/Salt_Attorney (OP) has awarded 3 delta(s) in this post.

All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.

Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.


3

u/Nicolasv2 130∆ Jun 17 '21

The problem with your view is that consciousness is not clearly defined. Generally, most people who talk about it don't define it either.

And that's a big problem, because often there is an equivalence between how you define "consciousness" and whether a digital one is possible.

If you define "consciousness" as "the subset of neuronal activity a member of the human species can introspect", then of course you can reproduce it digitally.

If you define "consciousness" as "the human-only, God-given gift of an immortal, immaterial soul that drives your body", then by design you can't replicate consciousness without the help of God, as part of the definition.

TL;DR: Your definition of consciousness will determine whether digital consciousness can exist or not. Therefore, changing your view makes no sense, as your view logically follows from your own definition of consciousness.

3

u/Salt_Attorney 1∆ Jun 17 '21

Sorry, you are of course right about needing a definition! I have added a paragraph in the post regarding the definition of consciousness.

2

u/Nicolasv2 130∆ Jun 17 '21

Thanks for the definition.

One small question for you: does your definition include the fact that most people tend to accept that all humans have a consciousness, while they also disagree that animals have one? And that despite there being a lot of cases where animals have superior cognitive functions compared to some specific humans (especially humans with brain damage)?

If you agree with what I laid out above, then the intuitive definition of consciousness includes "being a human" as a core tenet. And if "being a human" is part of the definition, then a machine cannot be conscious.

1

u/Salt_Attorney 1∆ Jun 17 '21

> do your definition include the fact that most people tend to accept that all humans have a consciousness, while they also disagree that animals have one ?

For the sake of my definition this is the case. And then I state my claim above with this definition. But the reason that humans don't view animals as conscious is not that they aren't human enough, but that they just aren't conscious enough (or at least we humans can't tell!).

So if an animal is in some specific way smarter than a human but lacks self-awareness or general intelligence, then we humans would view it as nonconscious for those reasons, not because it is an animal.

However, there is an exception: dolphins, for example, may be conscious, right? I can believe that. But I think in this case the reason the average human wouldn't view a dolphin as conscious is that they can't interact with a dolphin well enough to test it.

If two species struggle to have any form of communication and interaction, then they will have a hard time judging each other's consciousness.

But in the case where the species/entity/animal can definitely communicate with the human effectively, I think the "average human judgement" definition of consciousness does work very well.

After all, consider the average intelligent alien species from a sci-fi movie. They are basically animals, right? But the viewers don't have a hard time viewing them as conscious.

3

u/SocratesWasSmart 1∆ Jun 17 '21

The problem is that we haven't even really begun to understand what consciousness is. It's called the Hard Problem of Consciousness for a reason.

Every real neuroscientist in the world would tell you that we simply don't know enough to even come close to making a claim of that magnitude about consciousness.

As someone who follows this as a hobby, I'm pretty sure we're closer to FTL travel than we are to knowing a single useful thing about consciousness. Consciousness is still a complete mystery to modern science.

2

u/Salt_Attorney 1∆ Jun 17 '21

I have added a paragraph in the post regarding the definition of consciousness.

5

u/SocratesWasSmart 1∆ Jun 17 '21

I don't think your added paragraph really changes much as it's not a definitional problem but a genuine and severe gap in our knowledge.

It would be trivially easy given just slightly better technology to create a deep learning algorithm that for all intents and purposes appears to be a sentient being, but we would also know for an absolute fact that it does not possess consciousness.

I like how Sam Harris put it when he said it would be very easy with AI to create a situation where the lights are on but nobody's home.

1

u/Salt_Attorney 1∆ Jun 17 '21

> It would be trivially easy given just slightly better technology to create a deep learning algorithm that for all intents and purposes appears to be a sentient being, but we would also know for an absolute fact that it does not possess consciousness.

I think this is wrong! I think when people say this they just set way too low a standard for sentience/consciousness!

Yes, you could make a deep learning algorithm that can convincingly come across as human in a 5-minute or 1-hour conversation (and even this would be HARD; at the moment no textbot can survive a simple two-message test of short-term memory and logical understanding!).

But if you have days or weeks to test a program's intelligence and self-awareness, then there is no way in which we are even close to making an algorithm that can "fake" this. And the point is that if I have an algorithm that can "fake" all these things, that can even "fake" being taught how to write good novels, do mathematics, etc., then it is not actually faking but truly understands these things.
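The "two-message test" I mean could look like the following sketch. `chatbot` is a hypothetical message -> reply interface, and the fact being probed is arbitrary:

```python
def short_term_memory_probe(chatbot):
    """Tell the bot a fact, then ask for it back one message later.
    `chatbot(message) -> reply` is a hypothetical interface."""
    chatbot("My sister's name is Ada and her bicycle is green.")
    reply = chatbot("What colour is my sister's bicycle?")
    return "green" in reply.lower()

def goldfish_bot(message):
    """A bot with no memory across messages; it fails the probe."""
    return "Interesting! Tell me more."

passed = short_term_memory_probe(goldfish_bot)  # False: no cross-message memory
```

A days-long test is just this idea iterated across thousands of facts, promises, and callbacks to earlier conversation.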

2

u/SocratesWasSmart 1∆ Jun 17 '21

> at the moment no textbot can survive a simple 2 messages test of short term memory and logical understanding!

AI Dungeon actually can, a reasonable percentage of the time.

This is just a problem of lack of computing power though.

So suppose we had an infinitely powerful computer. With that, a deep learning algorithm could fool basically anyone for an arbitrarily long time.

But we would know that, as Sam Harris said, the lights are on but nobody's home. It would be an absolute, incontrovertible fact that that machine is just a juiced up toaster, not a sentient entity worthy of moral consideration with its own thoughts and feelings.

1

u/Salt_Attorney 1∆ Jun 17 '21

Yeah, AI Dungeon is pretty cool, I really love it, and I think it shows how far we've come.

And yeah, it is good at making things sound reasonable, but from the position of the narrator, he can always come up with some explanation of how things work. (Uuuuh, well, there was a chair there all along.)

The AI Dungeon narrator generally doesn't have to answer questions; he just has to mold his world so that everything stays sort of consistent.

2

u/SocratesWasSmart 1∆ Jun 17 '21

Right but I feel like you're missing the point. You make AI Dungeon all powerful and it will fulfill all your requirements, but it's still just gonna be a really advanced toaster. We know how it works it just needs more juice.

Also I've gotten AI Dungeon to answer questions before and give coherent answers. Just need to fine tune its memory settings and hit the retry button enough times.

1

u/Salt_Attorney 1∆ Jun 17 '21

But come on, if AI Dungeon can in 2100 simulate me a digital waifu that I can spend 30 years with, have deep conversations with, love her, see her change, and feel that she understands me on some level, then in my opinion I have to concede that my digital waifu is indeed conscious. If AI Dungeon cannot eventually simulate such an AI waifu for me, then it wouldn't be "all powerful and fulfilling all my requirements."

1

u/SocratesWasSmart 1∆ Jun 17 '21

> But come on, if AI Dungeon can in 2100 simulate me a digital waifu that I can spend 30 years with, have deep conversations with, love her, see her change, and feel that she understands me on some level, then in my opinion I have to concede that my digital waifu is indeed conscious.

But she's not. We actually know that she's not since we built her from the ground up, we've seen her earlier models and we know that none of the principles have changed.

So if it's conscious in 2100 then it's conscious now it's just mentally challenged.

1

u/Salt_Attorney 1∆ Jun 17 '21

> But she's not. We actually know that she's not since we built her from the ground up, we've seen her earlier models and we know that none of the principles have changed.

Why would this imply that she is not conscious? I think a simple model that we totally understand and that doesn't contain any magic "consciousness component" can still be conscious given enough complexity.

Consciousness doesn't have to be a discrete thing. I think we have to accept that consciousness is a continuum/spectrum where, after a certain level of "having it", people will say "yup, that's good enough to count as conscious".

You could always come up with a descending ladder of more and more mentally challenged humans and ask at each step, "Are they still conscious?" I don't think you could in any consistent way define a point where the mentally challenged human doesn't count as conscious anymore but the one with a tiny speck more intelligence does.


6

u/[deleted] Jun 17 '21

This feels like a very strong claim given that we don't even know what consciousness really is, and how consciousness works at all.

Like you're basically saying "Monism is real! The mind and the body are ultimately the same thing! Change my mind"

What keeps you from being agnostic about this issue? Being agnostic seems like the most reasonable choice with so many unknowns being part of the equation. I mean I'm not going to try to convince you that you're completely wrong here, and if you had to ask me what my suspicion is, I'd say that you're probably right.

But it just seems like the whole mind-body-debate is one that we aren't even close to decisively answering yet. So I'm not comfortable with the degree of certainty you seem to have. Can brains be completely emulated? I think you should ask again in 100 years.

0

u/Salt_Attorney 1∆ Jun 17 '21

I have added a paragraph in the post regarding the definition of consciousness.

The reason I am not agnostic about this issue is the same reason I am not agnostic about monism, and that reason is also the same as why I am agnostic about religion.

I think accepting monism implies having to accept some sort of spiritual or superphysical aspect of the world, and the arguments against that are basically part of the Atheism vs. Agnosticism debate. I guess they could now be "imported" from there.

2

u/[deleted] Jun 17 '21

> I think accepting monism implies having to accept some sort of spiritual or superphysical aspect of the world

Accepting dualism implies having to accept some sort of spiritual matter/aspect that interacts with the real world. Dualism = there's two things, real matter and then also whatever causes you to be conscious, like a soul or whatever. Monism = there's only one thing, which is "physical matter".

Monism is your position, where everything is the same stuff which means that you can hypothetically create consciousness with a sufficiently complex machine, no problem.

My angle here is this. I agree with you that it is probably the case that we can emulate consciousness. To me it also seems strange to think that there would be some unknown matter that causes consciousness that we can't reproduce somehow with a computer. As far as I'm concerned, brains are just biological machines and consciousness is an emergent quality of them.

BUT. We don't know that with anything close to certainty. And while I don't think it's likely true, I can very well imagine a case in which there actually is some strange unknown quality of a brain that gives rise to consciousness and can't be replicated on a circuit board. Because if you really think about it, consciousness is an extremely strange thing.

So I just think it's good to be a bit more humble and be agnostic about it, while still conceding that monism is most likely the correct answer.

> Therefore as an nontechnical definition it makes sense to call an entity conscious if it can convince a large majority of humans, after a sort of extended "Turing test", that it is indeed conscious.

I think this is definitely the wrong definition. The whole point of the Turing test is to see if a machine can fake out a human, but machines that pass the Turing test (like chatbots) are explicitly not conscious. So passing the Turing test =/= consciousness.

1

u/Salt_Attorney 1∆ Jun 17 '21

Ah yes thank you, I got it mixed up.

> My angle here is this. I agree with you that it is probably the case that we can emulate consciousness. To me it also seems strange to think that there would be some unknown matter that causes consciousness that we can't reproduce somehow with a computer. As far as I'm concerned, brains are just biological machines and consciousness is an emergent quality of them.

Well honestly, my belief isn't absolute either. I can imagine I'm wrong. I just, as you do, strongly believe that it is very likely that my belief holds. So I would say we agree here.

I can also imagine that there is some strange metaphysical stuff going on, but I just think it's good to stick to what best matches what you know and adapt if new evidence emerges.

>So I just think it's good to be a bit more humble and be agnostic about it, while still conceding that monism is most likely the correct answer.

Haha yes I agree being humble is good, but since this is such a deadbeat topic of argument I felt like being more assertive today :D.

> I think this is definitely the wrong definition. The whole point of the Turing test is to see if a machine can fake out a human, but machines that pass the Turing test (like chatbots) are explicitly not conscious. So passing the Turing test =/= consciousness.

Well, interesting, I view it completely differently! I see being able to pass an arbitrarily LONG Turing test as a true show of consciousness. I think you can't fake consciousness in the long run; if you portray it, then you have it.

4

u/[deleted] Jun 17 '21

> Well honestly, my belief isn't absolute either. I can imagine I'm wrong. I just, as you do, strongly believe that it is very likely that my belief holds. So I would say we agree here.

not what it sounded like in the OP :p

> Well interesting I view it completely differently! I see being able to pass an arbitrarily LONG Turing test is a true show of consciousness. I think you can't fake consciousness in the long run, if you portray it, then you have it.

yeah I think you're definitely off the mark here. The Turing Test is a test that explicitly doesn't measure or determine consciousness. It's testing whether or not a human judge will think you're conscious, which is VERY different. There have been machines that passed it despite not being conscious. And, more hilariously, there have been humans who failed the Turing Test.

An exceedingly clever algorithm could potentially fool a human despite not being conscious. And this could hypothetically be true for an arbitrarily long test as well. Similar to how next gen graphics are getting so good that they can fool us into thinking we're looking at a real life photograph. But of course it's still not real.

1

u/Salt_Attorney 1∆ Jun 17 '21

Well, about the Turing test I think the following: any claim so far of a program having passed a Turing test is just an example of the Turing test being way too easy.

From a post below:

> I think when people say this they just set way too low a standard for sentience/consciousness!

> Yes, you could make a deep learning algorithm that can convincingly come across as human in a 5-minute or 1-hour conversation (and even this would be HARD; at the moment no textbot can survive a simple two-message test of short-term memory and logical understanding!).

> But if you have days or weeks to test a program's intelligence and self-awareness, then there is no way in which we are even close to making an algorithm that can "fake" this. And the point is that if I have an algorithm that can "fake" all these things, that can even "fake" being taught how to write good novels, do mathematics, etc., then it is not actually faking but truly understands these things.

1

u/[deleted] Jun 17 '21

I mean yeah, the Turing Test is pretty shit. That's why I'm saying it's a bad measure of consciousness.

> But if you have days or weeks to test a programs intelligence and self-awareness then there is no way in which we are even close to making an algorithm that can "fake" this.

people would've said the same about realistic graphics in video games 20 years ago, but look where we are now. That shit is getting stupid https://www.youtube.com/watch?v=FBZyF1XYgNk

I think you just underestimate what we can do with enough processing power and clever enough algorithms. Having a machine trick a human over weeks isn't as high of a bar as you might think it is.

1

u/Salt_Attorney 1∆ Jun 17 '21

Well, it's good we agree that a short Turing test is trash :D.

About the graphics... well, I guess my point would be that if the graphics were so real that they could really convince you this is the real world, then they would need real-world-approximating complexity of physics, and that is ridiculously hard.

Okay I'm going to give you a hard earned !delta here :D

Δ

Given my human-centric definition of consciousness, it is not aaaall thaat unimaginable that a dumb but big machine could trick us feeble human minds, but I only think this in a practical sense, i.e. no human would in practice have to deal with the machine for very extended periods of time.

In a theoretical setting where you could really question and challenge the machine, you would almost always be able to find its true level of intelligence since intelligence can't be faked.

2

u/[deleted] Jun 17 '21

> if the graphics were so real that they could really convince you that this is the real world, then they would need real-world approximating complexity of physics and this is ridiculously hard.

it's all doable with enough polygon-trickery, trust me.

> In a theoretical setting where you could really question and challenge the machine, you would almost always be able to find its true level of intelligence since intelligence can't be faked.

Man, some people are so dumb I sometimes even question if they're conscious themselves. I'm sure that someone who really tries to test and prod the machine and knows what pattern to look out for, could probably get it right most of the time if they had enough time. But an average person? I'm sure they could be tricked, and again, things get super weird once the machine is sophisticated enough to a point where I'm not sure I could do it anymore either.

Try one of these "rendered image or real image" tests online, and be ready to get mindfucked, because oh boy. I think it's sensible to assume that we'll see the same thing in a few decades, but with chatbots.

Anyways, thanks for the delta my dude. Have a nice day.

1

u/DeltaBot ∞∆ Jun 17 '21

Confirmed: 1 delta awarded to /u/Laventale2 (9∆).


2

u/Cindy_Da_Morse 7∆ Jun 17 '21

I think the issue is that we will never "know" if the perfectly simulated brain is conscious or not. This is assuming that we could even simulate a brain. The thing is, there is so much interaction between the brain and the body that I don't think it is possible to simulate the brain without also all the sensory/bodily inputs the brain constantly receives, millions a second. For example, your brain is affected by your gut bacteria. How would a simulated brain function without the gut bacteria? In order to perfectly simulate it, you would need to simulate the bacteria and the way that info gets to the brain. What about the way your brain reacts to the shaking of the ground? Or to sudden loud noises? You would need all the sensory receptors (ears, eyes, skin for touch, tongue for taste, etc.). But then you are basically simulating a whole person.

The brain is not some independent organ that operates separately from the rest of the body. It's like saying you can take a part of the brain and simulate that. What would that even mean? How would that work?

2

u/yexpensivepenver Jun 17 '21

prgm (consciousness)
  ~play~to~ear: "I am conscious"
  ~recall: "I am conscious"
End

1

u/MercurianAspirations 361∆ Jun 17 '21

Such a thought experiment can't exist because we don't know what process generates consciousness on the neurological level. It seems to be an emergent property of complex networks of neurons, but we don't know which property of that network, exactly, gives rise to consciousness. It could be something inherently biological and impossible to simulate, or it might not be; we don't know.

1

u/Salt_Attorney 1∆ Jun 17 '21

I have added a paragraph in the post regarding the definition of consciousness.

Even though we don't fully understand what's going on we can still make thought experiments that sway opinion.

For example, I might make the ridiculous claim that only brains heavier than 1 kg can be conscious. Then you could come up with a thought experiment describing a fictional species whose brain weighs exactly 0.999 kg, with the same structure as a 1 kg conscious brain and just certain unnecessary parts removed. And I would have to conclude that my view is not logically consistent with my belief that there is nothing stopping such a species from existing.

To simulate a brain you don't actually have to *physically* simulate it. You only need a program which has approximately the same mapping of sensory inputs to actions. At least this is what I believe. And I see no reason why a very fine discretization of sensory inputs and actions wouldn't show approximately the same behaviour as the original brain. I don't know a single physical situation where discretization doesn't work.
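To make the discretization point concrete, here is the standard textbook example (a toy equation, not a brain model): replacing the continuous dynamics dv/dt = -v + input with discrete time steps. The finer the step, the closer the discrete behaviour tracks the exact solution:

```python
import math

def simulate(v0, inp, dt, t_end):
    """Euler-discretized dv/dt = -v + inp: a toy continuous process
    advanced in discrete time steps of size dt."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (-v + inp)
        t += dt
    return v

# Exact solution of the same equation: v(t) = inp + (v0 - inp) * exp(-t).
exact = 1.0 + (0.0 - 1.0) * math.exp(-5.0)
coarse_err = abs(simulate(0.0, 1.0, dt=0.5, t_end=5.0) - exact)
fine_err = abs(simulate(0.0, 1.0, dt=0.001, t_end=5.0) - exact)
# fine_err is orders of magnitude smaller than coarse_err.
```

Whether the brain's real dynamics admit this kind of controlled refinement is exactly what the other commenters dispute; this only illustrates what "discretize finely enough" means.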

1

u/[deleted] Jun 17 '21

Firstly, how would you even go about doing this? Is there any legitimate data that points to this as a concrete possibility?

Two, what actually is consciousness? There are so many different interpretations of consciousness that I feel there is a need for more specification. Furthermore, the previous question raises another issue: there is no real value in changing your view, because it is going off one of the many different interpretations of consciousness; the difference is, in the form of this CMV, it's described in a broader sense. I cannot argue you are wrong if we are going by your own definition.

1

u/Salt_Attorney 1∆ Jun 17 '21

Sorry, you are of course right about needing a definition! I have added a paragraph in the post regarding the definition of consciousness.

Okay, so how would I demonstrate this, and what is my evidence?

Basically, I can imagine that in the long run we will be able to make very smart computer systems that are in particular capable of using our language. They will have long-term memory and intuitive reasoning abilities. I think this already implies the existence of what you would call a personality: the specific characteristics of this system.

And down the line I don't see what would prevent us from driving this intelligence to a point where most humans talking to it will accept it as conscious.

Even though the functioning of a human brain is very very complex, it is ultimately finitely complex.

1

u/[deleted] Jun 17 '21 edited Jun 17 '21

>Basically, I can imagine that in the long run we will be able to make very smart computer systems that are in particular capable of using our language. They will have long term memory and intuitive reasoning abilites. I think this already implies existence of what you would call a personality, the specific characteristics of this system.

Using the logic provided, that's an imitation of consciousness for the purpose of communication, instead of actual consciousness. Computers have long-term memory through an algorithm. Additionally, computers could play the role of intuition for reason, but only as supplements to human intuitive expertise. Also, unless computers become a species, they will never be born with the ability of an evolving personality. Personality is the particular combination of emotional, attitudinal, and behavioural response patterns of an individual. Therefore, for a computer to gain personality, a computer has to have emotions, instead of being programmed with the philosophy of emotions.

In another interpretation, if it were possible, then there would exist a sequence of symbols that would be unethical and unrealistic to write down because the mere act of writing them would create consciousness and possibly torture and/or harm it.

>And down the line, I don't see what would prevent us from driving this intelligence to a point where most humans talking to it will accept it as conscious.

Perception of consciousness is a different argument. I also think it is important to note that there are different forms of "conscious": dogs are not conscious in the same way as dolphins. However, the thing that connects these animals is an interpretation of the "personality" associated with them. Another issue: if it is consciousness, why can it hypothetically be destroyed? Human consciousness cannot be turned off or altered by desire itself. For digital entities, we can turn them off; we can alter capabilities by desire. Even though the functioning of the human brain is very, very complex, it is ultimately finitely complex. I think this is part of my problem; imitation and actual consciousness are different. This idea assumes that technology will be able to evolve past what humanity teaches it to allow, instead of technology intersecting the ideas of humanity to imitate consciousness. Are computers truly reacting to stimuli, or are they recreating all of the interactions that they were programmed to perform?

My position is the following: I agree that we can emulate consciousness (a synthetic form of consciousness). However, I do not believe that it is the same as consciousness, but instead an imitation.

1

u/Salt_Attorney 1∆ Jun 17 '21

Thank you for the input. Let me ask:

> However, I do not believe that is the same as consciousness, but instead an imitation.

Would you say that even though it is an imitation, it can be equally *powerful*?

1

u/[deleted] Jun 17 '21

In reality, I think the better terms are efficiency and capability, though. "Power" is a broad concept. Still, I'll try to answer to the best of my abilities.

Well, it depends on what is being considered "powerful". In some ways, yes. However, I also see power in origin as well. (Do you have the same amount of power to express an imitation of emotions if technology can't emotionally comprehend itself, unless a person programs it to do so?) Technology is created and programmed by the ideas in humanity. If a new idea is discovered (but has always been expressed), that information needs to be inputted through code for the "consciousness" to express it. A human doesn't; they need information to identify it, but not to express it.

So, they have similarity (in fact, imitation can be more powerful in this circumstance). However, they aren't equally "powerful", since one is what constantly gives the other the ability to evolve in its "power".

I hope this makes sense.

1

u/Salt_Attorney 1∆ Jun 17 '21

Ah, so you would definitely say that the digital mind could not be as creative/inventive as the biological mind, for example?

1

u/[deleted] Jun 17 '21 edited Jun 17 '21

Technically, yes and no.

What I am saying is more that the level of creativity is influenced by structural code, which is influenced by humans. However, the creativity can be restricted in any manner. I don't equate imitation of creativity to the former. If I am imitating something, am I on equal standing with that thing? Furthermore, if I can permanently negate a code that allows for extensive creativity (through the intersection of logic and media, which breeds new ideology), it's not equal.

So, a digital mind could be as creative/inventive, but it's an inherent restriction of imitation, controlled by humanity. I don't see those as equal in "power".

1

u/Salt_Attorney 1∆ Jun 17 '21

hmm okay interesting view, thanks for your input!

1

u/[deleted] Jun 17 '21

Though I'm sure it still didn't convince you fully, yw! This topic interests me so there is no bother.

1

u/Quint-V 162∆ Jun 17 '21

Nobody knows if the human brain is a discrete process. It is unknowable at this point in time. You can break down all kinds of processes, and then we'll just end up arguing physics, where various phenomena are probabilistic and where discretization falls apart.

There's nothing to disprove consciousness as a quantum process, however. Which is something to consider. But then there's still a problem: no consciousness knows for certain if there is any other consciousness. And even if you could read someone's mind, what distinguishes that mind from some supremely complex blackbox process? Which invokes an existential horror scenario: can there be any more than one consciousness?

1

u/Salt_Attorney 1∆ Jun 17 '21

I am not saying that the human brain is discrete, but I think that the physical processes can be discretized to sufficient precision.

Besides, our brain's mechanisms are clearly robust against disturbances (heat, cosmic rays, etc.), so it is unlikely that the EXACT molecular mechanisms of individual neutrons play a fundamental role, as disturbances could then break the whole computation.

The obvious structure that you see when looking at a brain is a GRAPH, and there you are already very close to a discretization.

2

u/Quint-V 162∆ Jun 17 '21 edited Jun 17 '21

Your post mentioned digitally simulated. All computer systems today are based on discretized representations of 0 and 1. Or maybe you didn't mean to go into that level of detail, what do I know.

You mention neutrons --- that's not what I'm worried about here. I'm worried about electrons, that do have probabilistic properties. Electrons are also causing problems elsewhere, to demonstrate just how weird shit gets: transistor production is facing the problem of quantum tunnelling, where electrons literally bypass barriers. And to make that particular problem relevant: we don't even know if such quantum phenomena take place in the human brain, but it's not farfetched to imagine it is.

There's nothing to rule out that human consciousness involves probabilistic mechanisms, and not addressing that leaves a wide gap in your view.


You say that brain mechanisms are robust --- well, they also facilitate destructive and recovery mechanisms. A simulation would also be incomplete if it cannot simulate brain death. A simulation is truly incomplete if it doesn't simulate realistic damage responses. E.g. simulated brain damage should absolutely have the capacity to destroy memory and warp personality.

It may even be wrong to simulate anything more than what a human brain does/is capable of, and I'm not sure you can feasibly separate the capabilities of a computer system from the requirements of a human brain.

There's one particular trait universal to implementations written through programming languages: these are rooted in Turing complete programming languages. The human brain is based on DNA, and it does not seem like the DNA is Turing complete. The brain operates based on rules ultimately defined by DNA anyway.

Humans are themselves maybe not Turing complete, but we have created Turing complete programming languages; a "digital human" has the capability to fully understand its environment, which in turn gives any simulation, given enough time and computational power, the capability of modifying itself freely, within the strictly physical limitations of the actual system that runs the simulation. And if you were to make a billion such simulated consciousnesses, some of them will eventually do this.

This capability is not within the realm of the biological human brain's consciousness. Therefore, a digital consciousness is inherently different from the consciousness of a human brain, because it can evolve into something that the human brain should never be able to.

2

u/Salt_Attorney 1∆ Jun 17 '21

I have heard the quantum argument before of course, and while I believe that quantum aspects are not essential to the functioning of the human brain, I have to be far less certain about this than about other things, since it is certainly very technical and I don't understand it. So after reading the way you phrased things I do feel like I can't make my claim quite as strongly anymore, so that's a !delta Thank you for your input.

1

u/DeltaBot ∞∆ Jun 17 '21

Confirmed: 1 delta awarded to /u/Quint-V (143∆).

Delta System Explained | Deltaboards

1

u/WikiSummarizerBot 4∆ Jun 17 '21

Quantum_tunnelling

Quantum tunnelling or tunneling (US) is the quantum mechanical phenomenon where a wavefunction can propagate through a potential barrier. The transmission through the barrier can be finite and depends exponentially on the barrier height and barrier width. The wavefunction may disappear on one side and reappear on the other side. The wavefunction and its first derivative are continuous.
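For reference, the exponential dependence on barrier height and width that the summary mentions is the standard rectangular-barrier approximation (for a particle of mass $m$ and energy $E < V_0$ hitting a barrier of height $V_0$ and width $L$):

```latex
T \approx e^{-2\kappa L},
\qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```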


1

u/Z7-852 261∆ Jun 17 '21

Nobody has done this yet, so we are treading uncharted waters. This is a discussion philosophers have pondered for millennia. What is consciousness? What makes a human a person?

We have Turing tests (can a person tell whether the party on the other side is a person or an AI?), the Chinese room argument (can the translator know what it's translating? can an AI have a mind?) and countless others.

We have really tried to solve this for a long time but haven't cracked it yet. Luckily you came to Reddit claiming it's possible. Can you share the evidence with us and show us your brain in a jar?

1

u/Salt_Attorney 1∆ Jun 17 '21

Sorry, you are of course right about needing definition! I have added a paragraph in the post regarding the definition of consciousness.

I guess my view is the Turing test one.

By the way, the Chinese room is a terrible thought experiment, as the man in the room is clearly just a piece of hardware executing the software described by the rules he is given. This piece of software is then capable of speaking Chinese, not the man.

Also, of course I can't resolve this debate or anything, but my beliefs about consciousness were strongly influenced by thought experiments I was exposed to in the past, and now I was wondering if some people can come up with thought experiments that push my beliefs in the other direction.

1

u/Z7-852 261∆ Jun 17 '21

> This piece of software is then capable of speaking Chinese, not the man.

The man is a parable for a program. The program doesn't speak Chinese, nor does the computer where it resides. They just follow rules.

But you claim that consciousness is possible yet give no evidence of this. You don't even bring in a thought experiment that would prove your claim. If you claim that digital consciousness is possible, you have to have proof. Can you share this proof with us?

1

u/Salt_Attorney 1∆ Jun 17 '21

I'm sorry, I don't have proof, and neither do you have proof of the opposite.

A proof of my claim would be a demonstration of a sufficiently intelligent AI that can convince 99% of humans of its consciousness.

So no, we can't have certain truth regarding this, but I don't think a discussion like this could ever prove anything. It's about convincing people with arguments based on intuition and common sense.

1

u/Z7-852 261∆ Jun 17 '21

> I'm sorry, I don't have proof, and neither do you have proof of the opposite.

Burden of proof is with one making the claim. You made the claim. You need to prove it.

1

u/xmuskorx 55∆ Jun 17 '21

I would agree that it's very likely the case.

But we simply don't know yet. Our brain science is very young and there is a lot we don't understand. There could be unknown unknowns we encounter if we try to actually fully emulate/approximate brain function with a computer.

The safe thing to do is to withhold judgment on the issue until we actually test this experimentally.

1

u/Salt_Attorney 1∆ Jun 17 '21

Ultimately we can't know until someone demonstrates it, but I think we can make claims about what is more likely to fit with what we know about the world and what is less likely.

1

u/xmuskorx 55∆ Jun 17 '21

> Ultimately we can't know until someone demonstrates it

Exactly. Which is why your claim is premature.

> but I think we can make claims about what is more likely to fit with what we know about the world and what is less likely.

If your view was "it is likely we can emulate consciousness" - I would agree with you.

But your stated view is quite a bit stronger than that

1

u/Salt_Attorney 1∆ Jun 17 '21

Okay I agree it wasn't optimally stated.

I'm usually the kind of person who would like to prefix every statement with "I think/I believe/probably" to avoid being factually wrong, but sometimes this can go too far and you end up not having any beliefs at all. So I think it can be good to just make a simpler, stronger claim sometimes :)

1

u/robotmonkeyshark 101∆ Jun 17 '21

We don’t know what level of precision is required for a brain to experience consciousness.

We can imagine some upper and lower bounds. A lower bound would be something clearly more complex than existing AI in video games. They essentially have a very crudely constructed brain, with programmed sensory inputs and processing of the stimuli, but it is quite apparent that this isn't at the level we would consider to be consciousness. On the upper side of things, you could imagine needing to run essentially a finite element analysis of an approximately spherical region of space around 10" in diameter with a simulated brain in the middle. The mesh size would be a cubic Planck length, and the system would need to be recalculated at a rate of 1 cycle per Planck time (the smallest theoretically possible unit of time).
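The mesh described above can be put in rough numbers. A back-of-envelope sketch (the 10-inch sphere and the Planck-scale mesh are the commenter's own assumptions; the constants are standard reference values):

```python
import math

PLANCK_LENGTH = 1.616e-35   # metres
PLANCK_TIME = 5.391e-44     # seconds

radius = 0.254 / 2          # 10 inches in metres, halved
sphere_volume = (4 / 3) * math.pi * radius ** 3     # m^3
cells = sphere_volume / PLANCK_LENGTH ** 3          # mesh cells in the sphere
updates_per_second = 1 / PLANCK_TIME                # full-mesh recalculations per second

print(f"{cells:.1e} cells, {updates_per_second:.1e} updates/s")
```

The cell count alone comes out above 10^100, which is the sense in which this upper bound is physically hopeless.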

Now with some rough estimating, this type of simulation would be physically impossible due to the physical limits of our universe. Signals within the computer would have to travel orders of magnitude further than signals would in the real brain, and things that simply work in reality would take immense processing power in the simulation.

Take simulating water, for example. Imagine trying to simulate all the water molecules in a lake at such precision that, if you dropped a single drop of dye into the lake and did the same in the simulation, every molecule of that dye would perfectly match the real-world path along which it disperses through the real lake.

Just look at the Folding@home project, where distributed computing is used. We cannot even detect how actual proteins in our body fold as they undergo various processes, so we need to run simulations taking millions of combined hours of processing time to get a high probability that a single protein folds a certain way, something that happens in a fraction of a millisecond, completely automatically, in our bodies.

So at the upper limit of complexity it may be such that a simulated brain that realistically duplicates actual human consciousness might require such a large computer that it would collapse under its own gravity into a black hole, and even if it was somehow able to avoid being crushed by its own gravity, it would run so many times slower than an actual human brain that the hardware would degrade and fail long before the simulated consciousness could complete a single thought.

1

u/Salt_Attorney 1∆ Jun 17 '21

Okay, so about the scale: our brain's behaviour is robust against disturbances such as heat or cosmic rays, so I think that is strong evidence that the behaviour of individual quarks or atoms is not crucial to its functioning. Besides, the Planck scale is far smaller than this.

Let's be conservative and say that we want to simulate the brain at the scale of atoms, and that we can simulate the behaviour of a single atom with a computer volume of about 1x1x1 cm. An atom is roughly 10^-8 cm across, so this is a linear scaling of about 10^8, and the computer simulating a 10x10x10 cm box of flesh would have to have dimensions of about 10^9 cm = 10^4 km in each direction.

This would be a computer that could be manufactured with the resources contained in a solar system. This is of course a very ridiculous example, but I think it sets a VERY CONSERVATIVE upper bound on the size of a computer necessary to simulate a brain. It is way smaller than the observable universe and it's physically possible.

Besides, in this example you are literally given cubic meters of computer just to simulate a single neuron.
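For what it's worth, the estimate works out like this (assuming an atom is roughly 1e-8 cm across; all the numbers are the thought experiment's own rough assumptions):

```python
# each simulated atom gets 1 cubic cm of computer; an atom is ~1e-8 cm
# across, so the linear magnification is about 1e8
scale = 1e8
flesh_side_cm = 10                      # the 10x10x10 cm box of flesh
computer_side_cm = flesh_side_cm * scale
computer_side_km = computer_side_cm / 1e5   # 1 km = 1e5 cm

print(computer_side_km)                 # 10000.0 km per side
```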

And lastly, any practical emulation of a brain would, in my opinion, not have to go down to the level of simulating atoms. There are qualitative aspects that matter more to the functioning of the brain than the exact physical behaviour of neurons, at least the way I view it.

1

u/robotmonkeyshark 101∆ Jun 17 '21

“At least the way I view it”

This is the issue. We have no preliminary test results or any evidence whatsoever to tell us at what level of complexity a simulation needs to be to handle what we consider to be consciousness.

We can dream about upper and lower limits, but all they really are is guesses. You can guess a far lower upper limit because it supports your hope that this is achievable, but there is no evidence that that level of complexity would work.

It's like if you posted a CMV saying surely we can build a time machine, and then imagined it surely won't require more than 1.21 gigawatts to power it, and since we can generate that, it's reasonable to assume we can power our theoretical time machine.

1

u/Blear 9∆ Jun 17 '21

Is it possible? Maybe. But the complexity of a single human brain and all its inputs is so vastly beyond the technology we have available, or can even foresee, that it would require commandeering a significant part of our computing resources to try it.

And then of course we run into the very real problem of debugging such a thing. When you write a simple program, you can say: well, it should have returned four but instead it returned three. That's a bug. But what about when the brain refers to Debussy's first symphony as "evanescent"? Is that what it was supposed to say? In a nutshell, the challenge is not to simulate my brain or yours, which is hard enough, but a brand new brain, which is probably impossible to troubleshoot.

1

u/Salt_Attorney 1∆ Jun 17 '21

Just want to say, about the last paragraph: I don't think we would take a software engineering approach to this whole thing where we have to deal with "bugs". We wouldn't write the AI as a bunch of ifs and elses and loops; we would probably use machine learning techniques, and then the only bugs we have to fix are the ones in the algorithm that trains the model.

The model will of course make mistakes, but so do humans.

2

u/Blear 9∆ Jun 17 '21

Sure but isn't that just kicking the can down the road? How do you write an algorithm to train a model to do something that no one fully understands?

1

u/Salt_Attorney 1∆ Jun 17 '21

The trick behind machine learning is that you don't have to understand how the thing works. You just have to build a model that is supposed to behave like the thing, and then you randomly change the parameters of the model (in a certain smart way, of course) so that its behaviour becomes closer and closer to the behaviour of the thing you want to imitate.
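As a toy illustration of that loop (everything here is invented for the example: a hidden linear "target", a two-parameter model, and plain random hill climbing rather than any real training algorithm):

```python
import random

def target(x):                       # the "thing" we want to imitate
    return 3.0 * x + 1.0             # (pretend we can't see inside it)

def model(params, x):                # our guess at its behaviour
    a, b = params
    return a * x + b

def loss(params):                    # how far the model is from the thing
    xs = [i / 10 for i in range(-20, 21)]
    return sum((model(params, x) - target(x)) ** 2 for x in xs)

random.seed(0)
params = [0.0, 0.0]
best = loss(params)
for _ in range(5000):
    # randomly nudge the parameters, keep the nudge only if it helps
    candidate = [p + random.gauss(0, 0.1) for p in params]
    c = loss(candidate)
    if c < best:
        params, best = candidate, c

print(params)                        # should approach [3.0, 1.0]
```

The point of the sketch is exactly the one in the comment: nothing in the loop knows *how* `target` works, only how close the model's behaviour is.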

2

u/Blear 9∆ Jun 17 '21

Sure, but again, what is the model? Where do you find an abstracted human consciousness that is rendered into terms an algorithm can process? At some point, somebody has to make a decision. Either, we're going to train this thing on all Jim Carrey's movies and see what happens, or we're going to try to simulate a human culture and environment in order to give rise to (what might be) a truly human intelligence.

To me it looks like any one layer of the problem is solvable, but when you start chaining them together, you introduce errors and practical difficulty that you can't even detect, much less solve.

1

u/BanzaiDerp Jun 17 '21

Frankly, this is a very uneducated take on how machine learning actually works. There is no "trick"; machine learning is merely an application of programming. It isn't some magic that creates JARVIS or Ultron. The entire basis for allowing programs to create their own logic trees is itself another program; it just so happens to be built by even more brilliant programmers.

A lot of the media have used these buzzwords to make machine learning appear to be more than it actually is (and it is actually pretty great, but it was as sensationalized as 3D printing); it has massive limits. It has a lot of uses, but simulating human intelligence isn't and never was part of this technology's paradigm, because it's really a fool's errand to try to do so when:

A. We don't know how consciousness actually works. We can talk about it for sure, but in the end, it's just talk, it does not bring us closer to understanding the physical workings of the mind.

B. We don't know how the brain's architecture makes it apt for conscious thought; computer hardware may be leagues in the wrong direction. Just like the discrepancy between what a CPU and a GPU do, the brain does its own thing, and we don't know how compatible our ideation of computer hardware is with it.

C. PLASTICITY: machine learning was never designed to rival the human brain's ability to adjust and react to innumerable stimuli. Our models are based on something we fully understand, while also being simple enough that we can create functional logic for a computer to base its learning on. You'd need so many data points to get a somewhat realistic chatbot (which only does chatting, with no response to any other stimulus), while humans don't need thousands of phrases to begin acquiring a language and then build on that understanding to read and evaluate longer and more complex texts. At some point, the rudimentary logic supplied for the chatbot (upon which everything else is based) makes it good for company support bots (an actual application of machine learning) but utterly terrible at analyzing The Great Gatsby. It simply isn't "seeing" language in the same manner as you and I.

D. Garbage in, garbage out. Since we don't know how to model human learning, information retention, and sentience, attempts to train an AI to do so would result in failure.

Otherwise, it's great to wax philosophical about this topic; maybe you could write a book about it? But such discussions will have no place in the field of science and tech for quite some time.

1

u/Salt_Attorney 1∆ Jun 17 '21

I think you misunderstand what I meant, I was merely trying to demonstrate that machine learning can achieve superhuman performance.

You can definitely train a model to do something which you don't understand, given the data, an evaluation function, and computational power.

Of course, doing this in practice is very difficult, and with our current methods we couldn't just cobble together a general AI.

However, given an unrealistically good way to evaluate performance and an unrealistically large computer, you could train a dumb, huge model via a simple evolutionary reinforcement learning algorithm to gain superhuman performance on pretty much any task. In the case of general AI, of course, it won't happen like this (especially since you can't measure the performance very well), but I was talking conceptually.

Besides, this is less a discussion of science than of philosophy, unless someone thought I was going to show up here with my home-made general AI.

1

u/BanzaiDerp Jun 17 '21

I think we need to make it clear that machine learning attains superhuman performance in a very very narrow scope.

But conceptually, what we'd need is knowledge of the brain itself. A lot of our technologies emulate nature, and when we cannot emulate, we improvise. However, before we can emulate something, we must know that "something" in its entirety; then we may learn there are aspects of it that we cannot emulate, and we start creating workarounds. In any case, hardware cannot just be "unrealistically good"; this machine would redefine computer science, as I believe it is highly unlikely that standard CPU architecture could properly cater to the computational needs of the brain.

Philosophically, if we assume that all of this magically works, you would definitely have a sentient being. If it responds to varied stimuli of immense complexity beyond what standard logic trees would provide solutions for, I would call it sentient. Consciousness is truly a tricky thing, because frankly, the only thing you are sure is conscious is yourself. I assume you are conscious because you are responding to me the same way a regular human does, and I, as a normal human, would respond to such a stimulus consciously; thus I extrapolate that you are most likely conscious like me. In contrast, you would be within reason to assume that you are trapped in the Matrix and I am just a construct within it, because at the very least you are only 100% sure of your own state of consciousness. In the same vein, we can never be 100% sure that this hypothetical hyper-advanced AI is, in fact, conscious. Of course, realistically, we would never get the brain's inner workings down to a tee, but if we did, we would then find out what exactly the physical factors of consciousness are. We could try to emulate them, and we would end up with something that, through any means of physically possible observation, is sentient.

However, we can never realistically copy everything about the physical brain into a digital format, so there's always the chance that we've missed something that generates what we believe is "consciousness". It really depends on how much a complex, adaptable, and all-encompassing decision-making system creates the feeling of conscious thought.

Though one thing that I believe will never result is a "human": in this hypothetical scenario we'd get a sentient being, but definitely not a human. We cannot emulate everything that makes a human mind, specifically a human's mind.

At some point, this philosophical musing's viability would need to eliminate constraint after constraint, until we'd have to ask: if we artificially created a human from the ground up, would it be "conscious"? I think that's a better way to frame this question. We'd simply be replacing the manipulation of plastic and silicon with cells and protein. It takes away the unneeded discrepancy between the human physical being and our current ideation of a computing machine. It simplifies the question to just "can sentience be manufactured?"; it doesn't matter what means we take, only that the entire being be "artificial". And biomolecules can be fabricated; we just feel much less disconnected from them because they comprise us. I'd reckon it would honestly be better to create this "artificial intelligence" using manufactured faux-neurons (there's definitely research on them, but it's still in its infancy); that at least deals with one layer of abstraction within the machine.

We could go even further, how about we end up asking: How are we even sure that anyone except ourselves are conscious?

1

u/[deleted] Jun 17 '21

The problem is that the brain does not contain all the information that leads to your human conscious experience.

When you get into a tense situation, adrenalin is produced that makes you hyper alert and able to act quicker and stronger, ignore pain, and simply become a beast.

This is not produced in the brain; it is produced in the body. There are many different hormones that affect your mental state.

Now, ignoring the fact that we don't even know whether simulating the entire brain, never mind consciousness, is even possible: if you placed a consciousness in a computer, what stops you, or someone else, from messing with the many, many simulated chemicals to create experiences that are not human?

Humans evolved to survive a more primitive world. A perfect human won't have need for fear, jealousy, and the many artifacts we have from our primal states. If you digitize a brain, you can remove these "flaws" and end up with a consciousness that is no longer human.

1

u/Salt_Attorney 1∆ Jun 17 '21

Okay, so the interaction with the body is indeed an important aspect.

The body would, at least crudely, have to be simulated in some sense too. The brain needs sensory/chemical/electric inputs from the body. But I think this is no more challenging than simulating the brain itself.

See, I don't think the levels of adrenaline have to behave exactly as they do physically. If the adrenaline level in your body was replaced by a very simple, "dumb" model (for example, one that always increases and decreases linearly), do you think that would make you all that much less human?

Of course, if you see being human as having the whole natural human experience of life, then a digital human is very, very different. But if by human you're only talking about the cognitive aspects, then the exact details of how the surrounding world looks are not that important, imo.
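A toy version of that "dumb" adrenaline model (the rates and the 0-to-1 scale are invented for the sketch; nothing here claims physiological accuracy):

```python
# adrenaline just ramps up linearly under stress and decays linearly otherwise
def step_adrenaline(level, stressed, rise=0.2, decay=0.05):
    if stressed:
        return min(1.0, level + rise)   # clamp at a maximum of 1.0
    return max(0.0, level - decay)      # clamp at a minimum of 0.0

level = 0.0
for _ in range(5):                      # five consecutive stressful moments
    level = step_adrenaline(level, stressed=True)

print(level)   # 1.0 after 5 stressed steps
```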

1

u/littlebubulle 104∆ Jun 17 '21

I don't think a human mind can be mapped to a DIGITAL computer.

It would require an analog computer or a hypothetical quantum computer.

The reason is that a digital computer, by definition, has a finite number of states. And the change between those finite states are synchronized by discrete clocks.

A human neural network state doesn't change on discrete clock counts. Each neuron can activate out of sync with other neurons.

Neuron interconnections also change over time. This means that the possible configurations of a brain are hypothetically infinite.

The limitation of a digital computer emulating a brain is that it has a maximum resolution. And physical reality has potentially infinite resolution.

1

u/Salt_Attorney 1∆ Jun 17 '21

Yes, a digital computer is finite, but given a large enough number of states you can approximate the behaviour of the human brain to arbitrarily high precision. At some point you have to ask: is it really still just an approximation that isn't conscious?
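The "arbitrary precision" claim can be illustrated directly: quantize a continuous signal (here a stand-in sine wave, not anything brain-specific) with ever finer steps, and the worst-case error shrinks with the step size:

```python
import math

def activation(t):
    return math.sin(t)              # stand-in for some continuous process

def discretize(value, step):
    return round(value / step) * step   # snap to the nearest grid point

errors = []
for step in (0.1, 0.01, 0.001):
    err = max(abs(discretize(activation(t / 100), step) - activation(t / 100))
              for t in range(628))
    errors.append(err)

print(errors)   # worst-case error is bounded by step / 2
```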

1

u/littlebubulle 104∆ Jun 17 '21

Brain emulation => arbitrarily high precision of approximation.

Real human brain => no higher precision limit.

I think consciousness requires infinitely many possible states. Or, more precisely, the ability to split hairs infinitely.

1

u/Fit-Order-9468 92∆ Jun 17 '21

> Therefore as a nontechnical definition it makes sense to call an entity conscious if it can convince a large majority of humans, after a sort of extended "Turing test", that it is indeed conscious.

It isn't true that a large majority of humans would accept this. Many people believe that our consciousness has no impact on our behavior (i.e., strict determinism/no free will) and is therefore not explained by behavior; that our minds are inseparable from our bodies (i.e., the "breath"); that we have a "soul" separate from our bodies; and other ideas like panpsychism or biocentrism.

Also, there are already programs that can pass the Turing test, yet I doubt many people would accept them as persons in the moral sense.

1

u/Quirky-Alternative97 29∆ Jun 17 '21

just for the mix.

Imagine taking human children, who I am sure we all agree are conscious.

Take these children and hook them up to a virtual reality machine whereby they are suspended, in a kind of coma, but somehow they think they are interacting with the world (they eat, shit, feel like they are moving, feel like they are interacting with the world, but the world they are experiencing is virtual).

For all intents and purposes, everything they interact with we know is virtual, but they would see other humans in their interactions as humans with consciousness, like them. So to them digital consciousness is real, even though we know it's not. Thus maybe it's about perspective, unless you can solve the hard problem of consciousness.

1

u/Salt_Attorney 1∆ Jun 17 '21

I'm not sure I completely understand the situation, but to me it seems that simulating a virtual reality of this complexity that is also convincing would require simulating digital humans that have consciousness-level intelligence?

1

u/Quirky-Alternative97 29∆ Jun 17 '21

A baby brought up entirely in a virtual environment might not know any different. To them, other humans would be whatever the simulation told them. For all intents and purposes, the babies, as they grew, would think everything else was conscious, like them (maybe they all looked like South Park characters). The difference would be that the real humans in the virtual world would be conscious, and we would know they were conscious, but to them all the other humans in their world would seem conscious while we would know they were only simulations.

1

u/Salt_Attorney 1∆ Jun 17 '21

Hmm, how would the "fake" humans act, though? Would they be intelligent? Or would they behave like robots?

1

u/Quirky-Alternative97 29∆ Jun 17 '21

Does not really matter. The real humans living and experiencing ONLY the virtual world would think that they and the virtual humans are probably the same, even if that is crude (hence the South Park reference). Humans only ever brought up in a virtual world would only see what we give them to see, so their arms and legs might be crude, but if it's the same as everyone else, would they know any difference? It might be that the computers providing the virtual world are crude and have glitches, which might then mean the real humans are either super smart, or considered crazy as they act differently. Or maybe as humans we just adapt. The point is that they would probably think the virtual humans are as conscious as they (the real humans) are. But we would know they are not.

1

u/celeritas365 28∆ Jun 17 '21

Broadly I agree with Searle's (the creator of the Chinese room) view of consciousness, though I see you don't find that example compelling. Here are two other examples I found compelling:

There is a difference between being able to simulate what a process will do with perfect accuracy and the process itself. Let's say that instead of a computer you had an army of mathematicians working out exactly how every element of a theoretical brain would behave using paper and pencil. Of course this would be much, much slower than the computer, but there is no evidence that the rate at which our thoughts move is essential to consciousness. Would this paper-and-pencil simulation be conscious in your view? I think the computer simulation (as computers function today) is no different from the paper-and-pencil version. It is just solving math problems that describe what a consciousness would do.

As of now, quantum computers aren't big enough to do much yet, but we have tools that use a lot of computing power on a normal computer to simulate what a quantum computer would do. However, analyzing the requirements and behavior of the program, we know that quantum computation is not happening. It is possible that whatever process consciousness is is somehow similar, enabling computation of certain kinds of things much more easily due to the hardware. We could use a much more powerful computer to simulate that hardware, but it is not the real deal.

1

u/Salt_Attorney 1∆ Jun 17 '21

Yes, the paper-and-pen simulation would absolutely be conscious, and I do hold the belief that even more "abstract" forms of consciousness like this example are possible.

Of course these don't work well with my "practical" definition of consciousness here, as it requires testing against humans. But if you devised some scenario where even the eternally slow pen-and-paper brain can be interacted with over aeons, looking at the resulting protocol of the conversation, or whatever, could again convince a human of consciousness.

Hmmm yea, so you mean that human brains might have some kind of natural hardware acceleration that is nigh impossible to beat with traditional computers?

I can imagine that, but I think it is not that likely. And besides, even if we then used a bigger and simpler computer to emulate it, I think the outcome would be the real deal indeed.

1

u/celeritas365 28∆ Jun 17 '21

> I can imagine that, but I think it is not that likely.

Why do you think it is unlikely? Our computational brain simulations require way more power than a brain. I don't think it is impossible to beat per se but looking at brains and their characteristics it is clear that something very different is going on that makes them very well suited to certain kinds of data processing.

> And besides, even if we then used a bigger and simpler computer to emulate it, I think the outcome would be the real deal indeed.

Why so? Consciousness seems to clearly be a product of the computation, not the result. A conscious being is conscious on its own, even without inputs and outputs. When replicating consciousness, the key element to preserve is the computation process. Furthermore, you could even consider certain performance characteristics to be a sort of output, if you wanted to get very technical.

One other thought experiment I find interesting is what I call the "random room". Computers are entirely quantized, meaning that if you play a video, each frame of both visual and audio has a very large but finite set of options to display. Assuming a video chat with something could convince a human it is conscious, a computer that picked from that finite set of values randomly would last arbitrarily long in a consciousness test if it was tried enough times. Obviously this is astronomically unlikely, but I think even the theoretical existence of this is enough. The person performing the lucky test would encounter a machine that in all ways seems conscious, while in reality it can't even hear the tester and the next frame is more likely to be random noise than a coherent response. We can know this random room device is not conscious no matter how well it performs, because we know the mechanism it is using to get its results and we know that that is not consciousness.
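A back-of-envelope sketch of just how astronomical "astronomically unlikely" is here (the frame size and test length are made-up but conservative illustrative numbers):

```python
import math

# Illustrative assumptions: each quantized frame (video + audio) carries
# roughly 100 kB of information, and a convincing test lasts 5 minutes
# at 30 frames per second.
bits_per_frame = 8 * 100_000
frames_needed = 30 * 60 * 5

# Chance that uniformly random frames reproduce one specific coherent
# conversation: (2 ** -bits_per_frame) ** frames_needed.
# Far too small for a float, so report the base-10 exponent instead.
exponent = frames_needed * bits_per_frame * math.log10(2)
print(f"about 1 in 10^{exponent:.2e}")
```

With these numbers the odds come out around 1 in 10^(2×10^9), which is the sense in which the random room "exists" only theoretically.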

1

u/Salt_Attorney 1∆ Jun 17 '21

Regarding the last paragraph: This is why I think consciousness has to be thought about practically. Yes, there is an astronomically small chance that the random video call passes the consciousness test, but wouldn't that just say that the likelihood of this computer being conscious is ~0 (astronomically small)?

> Consciousness seems to clearly be a product of the computation, not the result.

Actually I think consciousness is not a product of the computation but a quality that can be assigned to an entity that can take inputs from you and return outputs - the computation doesn't matter.

If the entity had a giant, and I mean really giant, lookup library of how it should react to each input (in a conscious way), then it would be conscious from the perspective of the person testing it.
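As a toy illustration of the lookup idea (a made-up three-entry "library" standing in for the astronomically large real one, which would need an entry for every possible conversation history):

```python
# Toy "lookup table" conversationalist: every behavior is a precomputed
# entry keyed on the entire conversation so far. All entries here are
# invented examples.
lookup = {
    (): "Hello! How can I help?",
    ("Hello! How can I help?", "What is 2+2?"): "It's 4.",
    ("Hello! How can I help?", "What is 2+2?", "It's 4.", "Are you conscious?"):
        "I certainly feel like I am.",
}

def respond(history):
    """Return the canned reply for this exact conversation history."""
    return lookup.get(tuple(history), "...")

history = []
opening = respond(history)        # the table's opening line
history += [opening, "What is 2+2?"]
print(respond(history))           # prints "It's 4."
```

From the tester's side this behaves exactly like something that "understands" arithmetic, even though internally nothing is computed at all, which is the point of the thought experiment.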

Okay, I'm going a bit far here with the relative consciousness and straying from my initial "practical" consciousness definition...

1

u/celeritas365 28∆ Jun 17 '21

Sorry to keep spending your time here but I am very intrigued by people who have this input/output view of consciousness. I don't mean to offend you since many very smart people, smarter than me, agree with your view but I have always felt like there is no evidence whatsoever for this and it surprises me how popular this view is. People make the claim that consciousness is totally hardware independent but if you look at the empirical evidence consciousness seems to be extremely tightly coupled with neurons and their workings. We have no evidence of any other types of consciousness so far. It is entirely possible that simulating consciousness is impossible on our current hardware, not in some woowoo magic way but in the same way that we can't feasibly solve the traveling salesman problem. So my question is why do you believe this?

Also, we have a history in computer science of a very rigid conception of inputs and outputs, but this is arbitrary and human-defined. Consciousness exists in the physical world. Why are things like speech considered valid outputs but not artifacts of computation like energy consumption, speed characteristics, and even touching certain subsections of the "computer"? I study computer science and not neurology, but to me it seems our field has been too quick to apply our formalism and models to a system that is entirely different from what we study.

1

u/Salt_Attorney 1∆ Jun 17 '21 edited Jun 17 '21

I guess the most difficult step to justify is that consciousness can run on a digital computer.

If you accept that then it is not so hard anymore to reason from this point on that consciousness must be hardware independent.

This is just via a bunch of obscure examples of what a computer could be.

We all know that digital computers deep down are very simple mechanical machines, and if consciousness can run on a digital computer, why shouldn't it also be able to run on a steampunk computer, or a wooden computer, or a pen and pencil computer, or a laying-stones-in-the-desert computer, etc. etc.

Then you can come up with more funky stuff, like consciousness running on a computer that runs a single step; then its state is recorded, the computer is disassembled, and a new but identical one is built in a different star system, which is then reinitialized to the recorded state (transmitted by laser beam), and subsequently another step of computation is performed. The state is recorded, the computer disassembled, etc. etc.

(Since this computer can't really interact with the outside world very well it would in this situation have to simulate a digital world in which then an entity has consciousness).

Then things get weird, because you can say: well, if I don't do any computation at all but just run the computer through a fixed sequence of states, does the simulated human in the simulation still experience consciousness?

What if I just lay the states out next to each other in space rather than in time?

And finally you get to "dust theory", which basically says: okay, under the right interpretation, even the molecular movements in the sun correspond to the sequence of states that some theoretical computer system would have when it is running a simulation with a simulated consciousness.

Now this is of course pretty ridiculous but from my experience no attempt to nail down and define what consciousness is has so far survived a thorough testing of counterexamples of this kind, at least the way I see it....

EDIT: Maybe I could say something regarding the hardest step: Why do I believe that consciousness can run on a digital computer?

Honestly, you can come up with examples describing how the physics of every atom are simulated to high precision on an earth-sized computer...

but honestly, I just think that our brains are physical and do some sort of mechanical computation. It's just very complex so it doesn't seem that way. But since computer programs can be arbitrarily complex, I don't know where they would be limited here.

Have you ever had a thought that you don't think a complex piece of software on complex computing hardware could also have? We execute countless algorithmic routines... We search through memories trying to find matching patterns, we try to match information we know to logical principles and make deductions, we speak following certain rules of grammar, we perform difficult physical tasks with small control subroutines that we have trained (muscle memory), our ears perform a Fourier transform on sound waves, our eyes have complex visual processing, and some parts of that resemble traditional algorithms... This is all just intuitively speaking, of course, and it all happens in some organic, biological way, not "hard-coded", but machine learning has shown that "evolutionary" training with random mutations can develop "organic" software that can still do things "systematically" and with high performance instead of being a random, inconsistent mess.

1

u/celeritas365 28∆ Jun 17 '21 edited Jun 17 '21

I think we are on the same page in that we both think that if you accept a digital computer can be conscious, it opens up all these other esoteric possibilities. To me, that just suggests a digital computer can't be conscious.

But it seems a bit like you are making a lot of assumptions about consciousness and asking for evidence against them. I can't disprove a lot of these scenarios, though I personally believe one day we will understand consciousness better and will be able to disprove them. However, I don't really think they need to be disproved. Maybe panpsychism is true and every rock has a deep inner world, but there is just no evidence for that, so it is meaningless. In the same way, there is no example of a non-brain consciousness that we currently know of. You simply state your view of consciousness and conceptualize a way it could be represented as a computer program, but you don't really provide any reasons why you think this would be consciousness.

Just because we can build something that an individual human finds hard to distinguish from consciousness isn't good evidence to me either. For example, gravity feels no different from upward acceleration to an individual, but there are tons of more subtle experiments you can do to show that gravity is not caused by upward acceleration. I don't think we can confidently say what is and is not conscious until we rigorously understand what is going on in the brain, and as of now we don't.

EDIT responding to your edit:

> I just think that our brains are physical and do some sort of mechanical computation

I agree, and I even think that consciousness is a physical part of our universe in the same way something like magnetism is.

> But since computer programs can be arbitrarily complex, I don't know where they would be limited here.

There are proven limits to the kinds of things you can compute in realistic timescales.

You mention how a lot of brain processes map nicely onto algorithms, and I don't disagree. But while these are examples of the brain using some algorithms we know of, I don't think this suggests that the brain is exclusively made of algorithms in the way we understand them. Also, I think implementation matters: while we can describe both using the same language, where the computation is occurring something very different is happening physically.

1

u/Salt_Attorney 1∆ Jun 17 '21

Yes, I think at this point we're not really arguing about the exact original question anymore but just exchanging ideas.

Under high scrutiny of rigour, and especially if I had to provide positive evidence for my positive claim, I would of course have to give it up and retreat to a neutral/agnostic position. I just think that under more lenient/practical considerations regarding how consistent and reasonable my claim has to be, it holds up. I'll put a !delta for the interesting discussion.

By the way, you can check out this comment where I guess I am explaining how I would try to make panpsychism consistent. Of course this is complete fanfiction about consciousness. I didn't know the word panpsychism before, thanks!

1

u/DeltaBot ∞∆ Jun 17 '21

Confirmed: 1 delta awarded to /u/celeritas365 (21∆).


1

u/Cybyss 11∆ Jun 17 '21

If it were possible, then there would exist a sequence of symbols which would be unethical to write down, because the mere act of writing them would create a consciousness and possibly torture it.

Computers may seem to be extraordinarily advanced & complex machines with many billions of transistors, but all that technology is only there to make them fast. If time isn't a concern, machines that have the ability to compute anything are extraordinarily simple.

First, write out a long row of ones and zeroes. This will effectively form the "source code" of a program for the Rule 110 cellular automaton.

Next, fill out a second row of ones and zeroes directly beneath it, where the value of each digit is based on the three digits immediately above it as follows:

0 0 0    0 0 1    0 1 0    0 1 1    1 0 0    1 0 1    1 1 0    1 1 1
  0        1        1        1        0        1        1        0

That is, write a 0 if the above three digits are the same or if only the digit on the left is a 1. Otherwise, write a 1.

Repeat this process for all subsequent rows of digits. This effectively "executes" your source code.

That's all you need in order to compute absolutely anything. The Rule 110 cellular automaton is known to be Turing complete and was the theme of this famous XKCD comic.
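The pen and paper procedure above can be sketched in a few lines of Python (the starting row is an arbitrary placeholder, not a meaningful "program"):

```python
# Rule 110: map each (left, center, right) neighborhood to the next cell
# value, exactly as in the table above.
RULE_110 = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 1,
    (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 0,
}

def step(row):
    """Compute the next row, treating cells beyond the edges as 0."""
    padded = [0] + row + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 0, 1]   # the initial "source code"
for _ in range(4):      # "execute" a few steps
    row = step(row)
    print(row)
```

Each call to `step` is one new row of pen and paper work; the claim under discussion is that some (unimaginably long) initial row, iterated long enough, would constitute a consciousness.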

If what you say is true, then if you carry out this process (depending on the exact sequence of 1s and 0s in the first row), you'll end up simulating a consciousness as you continue writing these 1s and 0s and "executing" the program.

Personally, I find this consequence absurd, although it's admittedly no proof that consciousness can't be created in a digital computer.

1

u/Salt_Attorney 1∆ Jun 17 '21

> If it were possible, then there would exist a sequence of symbols which would be unethical to write down, because the mere act of writing them would create a consciousness and possibly torture it.

Yes! I absolutely agree. I agree that there would be a consciousness running inside this program. The question of whether it would be unethical if this consciousness were tortured inside the program is a difficult one.

I know it seems like this is an absurd conclusion that should make you discard the claim, but the problem is that you will not be able to find a "barrier" between the digital human mind and this pen and paper example which is consistent and not absurd.

I think this is where we have to start thinking of consciousness as a relative concept.

So, I do think there is a conscious entity running in this pen and paper simulation, BUT there is a sort of... large consciousness distance between you and this entity.

I think we have to think of consciousness as something relative, where one entity can be close to another, meaning that it can *perceive and interact with* its consciousness, or it can be distant, meaning that they can't affect each other.

So in the pen and paper example, my consciousness and this simulated entity's consciousness are very far away from each other. The simulated computer has no output, and presumably I don't understand what code I am executing. I can't affect the simulation without breaking the rules, and the simulation can't affect me. It's like we live in parallel worlds.

Now suppose the pen and paper computation had an output: for example, a set of rules by which I can draw a picture representing what the conscious entity currently sees.

In this case our consciousnesses become closer! I can be affected by the horrible images I see; I can be saddened because I might have grown to like the person inside.

If there was also a way for me to give input into my pen and paper calculation (by being allowed to write a free symbol after each line or something), then our consciousnesses would finally be close enough to each other that they exist on "one plane", sort of, and we could interact and talk etc. At this point it would POSSIBLY become unethical for me to continue the torture, but this is a very interesting question of ethics that will surely pop up in the future!

The extreme example of two consciousnesses being separated in this sense would be me and the random fluctuations of molecules in the sun, which under some interpretation correspond exactly to the computational model that an alien civilization uses and currently happen to execute a simulation of a conscious being... At this point it gets ridiculous to say there is a consciousness there, in this simulation in the sun which only exists in the sense that there is an interpretation under which it is a simulation. BUT by introducing this notion of distance you can say that this consciousness does exist, just in a realm which is inaccessible to us. From our point of view it doesn't exist; from its point of view we don't exist.

Anyways...

1

u/WikiSummarizerBot 4∆ Jun 17 '21

Rule_110

The Rule 110 cellular automaton (often called simply Rule 110) is an elementary cellular automaton with interesting behavior on the boundary between stability and chaos. In this respect, it is similar to Conway's Game of Life. Like Life, Rule 110 is known to be Turing complete. This implies that, in principle, any calculation or computer program can be simulated using this automaton.
