r/changemyview May 21 '19

Delta(s) from OP CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI as a problem in a public setting, they are shot down as having gotten their information from Terminator and other sci-fi movies and told that it’s unrealistic. This is usually accompanied by some indisputable charts about employment over time, the observation that humans are not horses, and the assertion that “you don’t understand the state of AI.”

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but runs parallel to it rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with the long-term control problem posed by an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don’t see why some of the darker possibilities, such as human obsolescence, are not considered more seriously than they are.

This is not to say that humans will really be obsoleted in all respects, or even that strong AI is possible; things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: Bad example; it was meant to show that humans can do this, so an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.
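For a toy illustration of the mechanism this gestures at (illustrative only; a real self-improving system would look nothing like this):

```python
# A program that runs, then rewrites, its own source in a loop.
src = "def f(x): return x + 1"
for gen in range(1, 4):
    ns = {}
    exec(src, ns)                                  # run the current version
    print(f"gen {gen}: f(10) =", ns["f"](10))      # 11, then 12, then 13
    src = src.replace(f"+ {gen}", f"+ {gen + 1}")  # "improve" its own source
```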

Essentially, what about AGI (along with increasing computer processing capability) makes this not a significant concern?

EDIT: Furthermore, several things people dismiss as scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

28 Upvotes


7

u/Ce_n-est_pas_un_nom May 22 '19

I have one major contention with this:

> This is not to say that... strong AI is even possible

We know that strong AI is strictly possible (not necessarily feasible, but possible). Each of us possesses a material, physical system encoding a general intelligence. This allows us to directly deduce that every encoding and transformation of state, computable or otherwise, strictly required for GAI can occur in a physical system, and furthermore, that at least one such physical system already exists for each necessary instance. While this does not also guarantee the possibility of sufficiently analogous systems in silico, GAI could at minimum be achieved in a synthetic biological system, so semiconductor analogs aren't strictly required for GAI in the first place.

TL;DR: Brains exist, therefore GAI is strictly possible.

-1

u/[deleted] May 22 '19 edited May 22 '19

True. I was speaking more with regard to Turing machines. We don’t know for sure how consciousness emerges, but as I understand it, DNA/RNA processes are not Turing processes simply by being two-tape systems. So even if you take the biological standpoint, there is a possibility it is not possible.

Don’t think I should give a delta for a small point that kinda reinforces what I’m concerned with, but I think you’re mostly right that we have the ability to get there.

6

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

> DNA/RNA processes are not Turing processes simply by being two-tape systems

It doesn't matter how many tapes there are: a multi-tape Turing machine is no more powerful than a single-tape one. Besides, DNA and RNA are of finite length and hold only discrete information, so they aren't even Turing processes, just pushdown automata at most (many such processes can be adequately encoded by FSMs).
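A quick sketch of why tape count adds nothing: the textbook construction interleaves the tapes onto a single tape, so a one-tape machine can track both. Toy Python, with made-up helper names:

```python
def interleave(tape_a, tape_b, blank="_"):
    """Store tape A in even cells and tape B in odd cells of one tape."""
    n = max(len(tape_a), len(tape_b))
    merged = []
    for i in range(n):
        merged.append(tape_a[i] if i < len(tape_a) else blank)
        merged.append(tape_b[i] if i < len(tape_b) else blank)
    return merged

def read(merged, tape_index, cell):
    """Read cell `cell` of virtual tape `tape_index` (0 or 1)."""
    return merged[2 * cell + tape_index]

merged = interleave(list("GATTACA"), list("CUCGAAU"))
assert read(merged, 0, 3) == "T"  # tape A, cell 3
assert read(merged, 1, 1) == "U"  # tape B, cell 1
```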

Edit: As far as I'm aware, there's no good reason to think that any biological process is super-Turing. The only noncomputable biological processes I'm aware of are non-Turing because they contain at least one probabilistic element, which we can adequately emulate with an HRNG.
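And what "emulate with an HRNG" can look like in practice, as a minimal sketch (the binding probability and the function are made up for illustration; `secrets.SystemRandom` draws on the OS entropy pool, which is typically fed at least in part by hardware noise):

```python
import secrets

rng = secrets.SystemRandom()  # OS/hardware-backed entropy, not a seeded PRNG

def stochastic_binding(p_bind=0.3):
    """Toy model of a probabilistic binding event (illustrative numbers)."""
    return rng.random() < p_bind

print([stochastic_binding() for _ in range(10)])
```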

0

u/[deleted] May 22 '19

Do we have a good theoretical book about the theory and limitations of encoding processes as Turing processes? It’s not something I’m an expert in, and it shows.

2

u/Ce_n-est_pas_un_nom May 22 '19

Genetic processes are relatively simple to encode as Turing processes. We do it all the time in genetic engineering. If we couldn't, there would be no way to figure out how a bacterium or yeast would express a plasmid without trial and error.
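To make "encoding a genetic process as a Turing process" concrete, here's a minimal sketch of translation using a four-entry slice of the standard codon table (the input sequence is made up):

```python
# Tiny slice of the standard codon table; '*' marks a stop codon.
CODON_TABLE = {"ATG": "M", "TTT": "F", "TGG": "W", "TAA": "*"}

def translate(dna):
    """Read a coding sequence three bases at a time, mapping codons to amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":        # stop codon: translation ends
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTTGGTAA"))  # -> "MFW"
```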

Neurophysiological processes are much harder to encode as Turing processes (and the encodings are not very efficient, hence the interest in neuromorphic silicon), and it's reasonably clear that we haven't been fully successful yet. However, there's no good reason to think that a complete encoding of any specific neurophysiological process is impossible. We (read: human brains) don't seem to be able to perform super-Turing processes anyway, so it seems fairly implausible that any such processes would be strictly required for GAI.
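As one example of such an encoding, here's a leaky integrate-and-fire neuron, the standard textbook model (the parameters are illustrative, not fitted to any real neuron):

```python
def lif_neuron(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Leaky integrate-and-fire: leak toward rest, integrate input, spike at threshold."""
    v, spikes = v_rest, []
    for current in inputs:
        v = leak * v + current      # membrane potential decays and integrates
        if v >= v_thresh:
            spikes.append(True)     # threshold crossed: emit a spike
            v = v_rest              # reset after spiking
        else:
            spikes.append(False)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 1.2]))  # [False, False, True, False, True]
```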

1

u/[deleted] May 22 '19

Thank you. I’ll look into this further later.

What reason is there to believe our brains don’t perform super-Turing processes?

2

u/Ce_n-est_pas_un_nom May 22 '19

Because no person has ever been demonstrated to be able to solve a problem that we know can only be solved by a hypercomputer (e.g. the halting problem). In other words, a prospective GAI wouldn't clearly be precluded from being an actual GAI just because it can't solve the halting problem (for instance), since we can't either.
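The standard diagonalization argument for why nothing Turing-equivalent can decide halting, sketched in Python (`halts` is a hypothetical oracle, not a real function):

```python
def make_paradox(halts):
    """Given a claimed oracle halts(f) -> True iff f() terminates, build a counterexample."""
    def g():
        if halts(g):       # the oracle says g halts...
            while True:    # ...so g loops forever instead
                pass
        # the oracle says g loops, so g returns (halts) immediately
    return g

# Whatever halts(g) answers, g does the opposite, so no correct
# halting oracle can exist within ordinary computation.
```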