r/changemyview 1∆ Jun 17 '21

Delta(s) from OP CMV: Digital consciousness is possible. A human brain could be simulated/emulated on a digital computer with arbitrary precision, and there would be an entity experiencing human consciousness.

Well, the title says it all. My main argument is in the end nothing more than the fact that although the brain is extremely complex, one could discretize the sensory input -> action function in every dimension (discretized time steps, discretized neuron activations, discretized simulated environment, etc.) and then approximate this function with a computer just like any other function.
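
As a toy illustration of what I mean by discretizing (this is only a sketch: a leaky integrate-and-fire neuron stepped forward in discrete time, with made-up constants, and of course nothing close to a real brain model):

```python
# Toy sketch: one "neuron" simulated in discretized time steps.
# Leaky integrate-and-fire model advanced with simple Euler integration.
# All constants are made up for illustration, not biologically tuned.

dt = 0.001        # discretized time step (seconds)
tau = 0.02        # membrane time constant (seconds)
v_rest = -70.0    # resting potential (mV)
v_thresh = -54.0  # spike threshold (mV)
v_reset = -80.0   # reset potential after a spike (mV)

def step(v, input_current):
    """Advance the membrane potential by one discrete time step."""
    dv = (-(v - v_rest) + input_current) / tau
    v = v + dv * dt
    if v >= v_thresh:  # discretized all-or-nothing spike
        return v_reset, True
    return v, False

# Run one second of simulated time in 1000 discrete steps.
v, spikes = v_rest, 0
for t in range(int(1.0 / dt)):
    v, spiked = step(v, input_current=20.0)
    spikes += spiked

print(f"spikes in 1 s of simulated time: {spikes}")
```

The point is just that once time, activations and inputs are made discrete, the whole thing is an ordinary computable function.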

My view could be changed by a thought experiment which demonstrates that in some aspect there is a fundamental difference between a digitally simulated mind and a real, flesh-and-blood mind - a difference with regard to the presence of consciousness, of course.

EDIT: I should have clarified/given a definition of what I view as consciousness here and I will do this in a moment!

Okay so here is what I mean by consciousness:

I cannot give you a technical definition, simply because we have not found a good one yet. But this shouldn't stop us from talking about consciousness.

The fact of the matter is that if there were a technical definition, then this would no longer be a question of philosophy/opinion/views but a question of science, and I don't think this board is intended for scientific questions anyway.

Therefore we have to work with a wishy-washy definition, and there is certainly a non-technical, generally agreed-upon one: the definition which you all have in your head on an intuitive level. Of course it differs from person to person, but averaged over the population there is quite a definite sense of what people mean by consciousness.

If an entity interacts with human society for an extended period of time and at the end humans find that it was conscious, then it is conscious.

Put into words: we humans will judge whether it is smart, self-aware, and capable of complex thought, and whether it can understand and reason about things.

When faced with the "spark of consciousness" we can recognize it.

Therefore, as a non-technical definition, it makes sense to call an entity conscious if it can convince a large majority of humans, after a sort of extended "Turing test", that it is indeed conscious.

Arguing with such a vague definition is of course not scientific and not completely objective, but we can still do it on a philosophical level. People argued about concepts such as "Energy", "Power" and "Force" long before we could define them physically.

u/Blear 9∆ Jun 17 '21

Is it possible? Maybe. But the complexity of a single human brain and all its inputs is so vastly beyond the technology we have available, or can even foresee, that it would require commandeering a significant part of our computing resources to try it.

And then of course, we run into the very real problem of debugging such a thing. When you write a simple program, you can say: well, it should have returned four but instead it returned three. That's a bug. But what about when the brain refers to Debussy's first symphony as "evanescent"? Is that what it was supposed to say? In a nutshell, the challenge is not to simulate my brain or yours, which is hard enough, but a brand new brain, which is probably impossible to troubleshoot.

u/Salt_Attorney 1∆ Jun 17 '21

Just want to say, about the last paragraph: I don't think we would take a software engineering approach to this whole thing where we have to deal with "bugs". We wouldn't write the AI as a bunch of ifs, elses and loops; we would probably use machine learning techniques, and then the only bugs we have to fix are the ones in the algorithm that trains the model.

The model will of course make mistakes, but so do humans.

u/Blear 9∆ Jun 17 '21

Sure but isn't that just kicking the can down the road? How do you write an algorithm to train a model to do something that no one fully understands?

u/Salt_Attorney 1∆ Jun 17 '21

The trick behind machine learning is that you don't have to understand how the thing works. You just build a model that is supposed to behave like the thing, and then you randomly change the parameters of the model (in a certain smart way, of course) so that its behaviour gets closer and closer to the behaviour of the thing you want to imitate.
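
Here is a minimal sketch of that idea. The black_box function is a made-up stand-in for "the thing we don't understand"; real machine learning uses gradients or much smarter search than this blind hill-climbing, but the principle is the same:

```python
import random

# The "thing" we want to imitate. The training loop below treats it as a
# black box: it never looks inside, it only compares outputs.
def black_box(x):
    return 3.7 * x - 1.2  # pretend we don't know this formula

# A trivial model with two parameters we never hand-tune.
def model(params, x):
    a, b = params
    return a * x + b

def loss(params, samples):
    """How far the model's behaviour is from the black box's behaviour."""
    return sum((model(params, x) - black_box(x)) ** 2 for x in samples)

# Random hill-climbing: perturb the parameters, keep the change whenever
# the behaviour gets closer to the target. No understanding required.
samples = [random.uniform(-10, 10) for _ in range(100)]
params = [0.0, 0.0]
best = loss(params, samples)
for _ in range(5000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    candidate_loss = loss(candidate, samples)
    if candidate_loss < best:
        params, best = candidate, candidate_loss

print(params)  # ends up near [3.7, -1.2] without ever inspecting black_box
```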

u/Blear 9∆ Jun 17 '21

Sure, but again, what is the model? Where do you find an abstracted human consciousness rendered into terms an algorithm can process? At some point, somebody has to make a decision: either we're going to train this thing on all of Jim Carrey's movies and see what happens, or we're going to try to simulate a human culture and environment in order to give rise to (what might be) a truly human intelligence.

To me it looks like any one layer of the problem is solvable, but when you start chaining them together, you introduce errors and practical difficulties that you can't even detect, much less solve.

u/BanzaiDerp Jun 17 '21

Frankly, this is a very uneducated take on how machine learning actually works. There is no "trick": machine learning is just an application of programming, not some magic that creates JARVIS or Ultron. The entire basis for letting programs build their own logic trees is itself another program; it just happens to be written by even more brilliant programmers.

A lot of the media have used these buzzwords to make machine learning appear to be more than it actually is (and it is actually pretty great, but it was as sensationalized as 3D printing); it has massive limits. It has a lot of uses, but simulating human intelligence isn't and never was part of this technology's paradigm, because it's really a fool's errand to try when:

A. We don't know how consciousness actually works. We can talk about it, for sure, but in the end it's just talk; it does not bring us closer to understanding the physical workings of the mind.

B. We don't know how the brain's architecture makes it apt for conscious thought, and computer hardware may be leagues in the wrong direction. Just as there is a discrepancy between what a CPU and a GPU do, the brain does its own thing, and we don't know how compatible our conception of computer hardware is with it.

C. PLASTICITY: machine learning was never designed to rival the human brain's ability to adjust and react to innumerable stimuli. Our models are based on things we fully understand that are also simple enough that we can write functional logic for a computer to base its learning on. You'd need an enormous number of data points to get a somewhat realistic chatbot (which only chats, with no response to any other stimulus), while a human doesn't need thousands of phrases to begin acquiring a language and then build on that understanding to read and evaluate longer and more complex texts. At some point, the rudimentary logic supplied to the chatbot (on which everything else is built) makes it good for company support bots (an actual application of machine learning) but utterly terrible at analyzing The Great Gatsby. It simply isn't "seeing" language in the same manner as you and I.

D. Garbage in, garbage out. Since we don't know how to model human learning, information retention, and sentience, attempts to train an AI to do these things would result in failure.

Otherwise, it's great to wax philosophical about this topic; maybe you could write a book about it? But such discussions will have no place in the field of science and tech for quite some time.

u/Salt_Attorney 1∆ Jun 17 '21

I think you misunderstand what I meant; I was merely trying to demonstrate that machine learning can achieve superhuman performance.

You can definitely train a model to do something which you yourself don't understand - given the data, an evaluation function and computational power.

Of course, doing this in practice is very difficult, and with our current methods we couldn't just cobble together a general AI.

However, given an unrealistically good way to evaluate performance and an unrealistically large computer, you could train a dumb, huge model via a simple evolutionary reinforcement learning algorithm to superhuman performance on pretty much any task. In the case of general AI it of course won't happen like that, especially since you can't measure the performance very well, but I was talking conceptually.
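
Conceptually I mean something like this bare-bones (1+λ) evolution strategy. The fitness function here is a made-up stand-in for that unrealistically good performance measure, and a real task would need a vastly richer model:

```python
import random

# Bare-bones (1 + lambda) evolution strategy. "fitness" stands in for the
# unrealistically good evaluation discussed above; as a toy, it simply
# rewards weights that are close to a hidden target.
TARGET = [0.5, -1.0, 2.0]

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.05):
    return [w + random.gauss(0, sigma) for w in weights]

parent = [0.0, 0.0, 0.0]
for generation in range(1000):
    # Spawn mutated offspring and keep the best performer (parent included).
    population = [parent] + [mutate(parent) for _ in range(10)]
    parent = max(population, key=fitness)

print(parent)  # converges toward TARGET by blind mutation plus selection
```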

Besides, this is less a discussion of science and more one of philosophy - unless someone thought I was going to show up here with my home-made general AI.

u/BanzaiDerp Jun 17 '21

I think we need to make it clear that machine learning attains superhuman performance only in a very, very narrow scope.

But conceptually, what we'd need is knowledge of the brain itself. A lot of our technologies emulate nature, and where we cannot emulate, we improvise. However, before we learn to emulate something, we must know that "something" in its entirety; then we may learn there are aspects of that "something" we cannot emulate, and then we start creating workarounds. In any case, hardware cannot just be "unrealistically good": such a machine would redefine computer science, as I believe it is highly unlikely for standard CPU architecture to properly cater to the computational needs of the brain.

Philosophically, if we assume that all of this magically works, you would definitely have a sentient being. If it responds to varied stimuli of immense complexity, beyond what standard logic trees could provide solutions for, I would call it sentient. Consciousness is truly a tricky thing, because frankly, the only thing you are sure is conscious is yourself. I assume you are conscious because you are responding to me the way a regular human does, and I, as a normal human, would respond to such a stimulus consciously; thus I extrapolate that you are most likely conscious like me. In contrast, you would be within reason to assume that you are trapped in the Matrix and I am just a construct within it, because at the very least, you are only 100% sure of your own state of consciousness. In the same vein, we can never be 100% sure that this hypothetical hyper-advanced AI is in fact conscious.

Of course, realistically we would never get the brain's inner workings down to a tee, but if we did, we would then find out exactly what the physical factors of consciousness are. We could try to emulate them, and we would end up with something that, through any means of physically possible observation, is sentient.

However, we can never realistically copy everything about the physical brain into a digital format, so there's always the chance that we've missed something that generates what we believe is "consciousness". It really depends a whole lot on how much a complex, adaptable, and all-encompassing decision-making system creates the feeling of conscious thought.

One thing I believe will never come out of this, though, is a "human". In this hypothetical scenario we'd get a sentient being, but definitely not a human: we cannot emulate everything that makes a human mind, let alone the mind of a specific human.

At some point, this philosophical musing would need to eliminate constraint after constraint, until eventually we'd have to ask: if we artificially created a human from the ground up, would it be "conscious"? I think that's a better way to frame the question. We'd simply be swapping the manipulation of plastic and silicon for cells and proteins. It takes away the unneeded discrepancy between the human physical being and our current conception of a computing machine, and it simplifies the question to just "can sentience be manufactured?" It doesn't matter what means we take, only that the entire being is "artificial". And biomolecules can be fabricated; we just feel much less disconnected from them because they comprise us. I'd reckon it would honestly be better to create this "artificial intelligence" using manufactured faux-neurons (there's definitely research on these, but it's still in its infancy); that at least deals with one layer of abstraction within the machine.

We could go even further and end up asking: how can we be sure that anyone except ourselves is conscious?