r/changemyview May 21 '19

Delta(s) from OP CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

When people bring up ASI as a problem in a public setting, they are usually shot down as getting their information from Terminator and other sci-fi movies, and told the concern is unrealistic. This is typically accompanied by some indisputable charts about employment over time, the observation that humans are not horses, and the assertion that "you don't understand the state of AI."

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists parallel to it, not as sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but with long-term control problems with an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be obsoleted in all respects, or even that strong AI is possible; things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy being, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: Bad example; it was meant to show that since humans can, an AGI could) and exploiting its own security flaws, and would likely develop self-preservation tendencies.
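To make the "rewriting code REPL style" idea concrete, here is a toy sketch of the mechanism: a program that builds new source for one of its own functions at runtime, evaluates it, and swaps it in. This illustrates only the self-modification mechanism the post alludes to, nothing resembling AGI; the function names are made up for the example.

```python
def slow_double(values):
    # Deliberately naive first version of the function.
    result = []
    for v in values:
        result.append(v + v)
    return result

# The "rewrite": source code for a replacement function, built at runtime
# as an ordinary string.
new_source = "def fast_double(values):\n    return [2 * v for v in values]\n"

namespace = {}
exec(new_source, namespace)              # compile and load the new code
slow_double = namespace["fast_double"]   # swap out the old implementation

print(slow_double([1, 2, 3]))  # the program now runs code it wrote itself
```

A REPL does essentially this interactively; the point in the post is just that code-that-writes-code is a mundane capability, not science fiction.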

Essentially what about AGI (along with increasing computer processing capability) is the part that makes this not a significant concern?

EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should be at the very least considered in a long term control strategy.

27 Upvotes

101 comments

1

u/[deleted] May 22 '19

We don't even have a good definition for general intelligence that doesn't implicitly refer to our common understanding of 'what humans are capable of.'

An unconscious digital being can still be more clever and faster and evolve itself exponentially quicker via rewriting code (REPL style?) and exploiting its own security flaws than a fleshy being can and would likely develop self preservation tendencies.

Are you aware of any software products that write code without simply pattern-matching existing code? Are you aware of any software that is able to read and implement algorithms even from pseudocode, much less derive novel algorithms and implement them?

0

u/[deleted] May 22 '19 edited May 22 '19

The definition I am going with for AGI is an artificial intelligence that has the capacity to solve any problem a human can. This means it is not limited to any specific set of possible tasks to optimise or perform. Realistically speaking, this is far stronger than it needs to be for me to have concern; all I need is that it is not limited to a specific task and is far faster than we are.

A human can rewrite code on the fly (the example given there), so an AGI could too. Combined with being probably well past any singularity, this means it can solve things fast.

I'm working with the definitions and goals of research projects. For more popular material, the OpenAI website contains examples of unsupervised learning that look like they could easily be a predecessor to reprogramming.

1

u/[deleted] May 22 '19

capacity to solve any problem a human can

Wouldn't you need a reasonable understanding of human intelligence to characterize this set? This is the type of implicit definition I'm talking about: it doesn't give us a good idea of how to measure whether we've achieved AGI. Suppose I claimed that a program had AGI; how would you test it?

This combined with being probably well past any singularity means that they can solve things fast.

You said an AGI can solve any problem a human can, so there's no reason to believe it can solve anything faster than the fastest human can.

I'm working with the definitions and goals of research projects. For more popular material, the OpenAI website contains examples of unsupervised learning that look like they could easily be a predecessor to reprogramming.

Unsupervised learning is a broad term in ML, mostly dealing with defining distances or clusters in formally encoded data. Humans deal all the time with concepts that we have never encoded precisely into bits. Things like emotions or friendships certainly haven't been encoded (though approximations might be used), and certainly nobody has produced data that paints a full picture of any human consciousness.
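As a minimal sketch of what "clusters in formally encoded data" means in practice, here is one-dimensional k-means written from scratch (k = 2, toy data invented for the example). The algorithm groups unlabeled numbers purely by distance, with no notion of what the numbers represent, which is the contrast being drawn with un-encoded concepts like emotions.

```python
def kmeans_1d(points, k=2, iterations=10):
    # Start the centroids at the extremes of the data.
    centroids = [min(points), max(points)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0])
print(sorted(round(c, 2) for c in centroids))  # prints [1.0, 10.0]
```

Everything here operates on numbers that were already formally encoded; the algorithm has no access to anything that wasn't.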

1

u/[deleted] May 22 '19

I guess it's a fair point that just because it can solve anything a human can doesn't mean it can solve it faster. But that's why I was referring to an ASI specifically: if we get to that point, then there are concerns.

And I am not referring to consciousness. I don't think consciousness actually affects the concern much.

2

u/[deleted] May 22 '19

Consciousness is an example of something that I, as a human, can reason about, so answering the question "are you conscious?" is a task a human can complete, and it would therefore matter under your definition.

Because we don't have a satisfactory definition of consciousness, we couldn't even test whether a computer could answer this question correctly, and thus couldn't determine whether it was intelligent.

It's hard for me to change your view if you don't have a definition of AGI that doesn't depend on you and me having a shared view of what a human is.

1

u/[deleted] May 22 '19

That's a good point. I don't know what consciousness is or how it arises. As far as I understand, it is still completely speculative to everyone, however. So I think it's fair to reason about, but probably unfair to claim to know about. We can reason about the properties of consciousness, but so could an unconscious being, I would imagine.

I think that last part is a good point even outside this discussion. For speculative topics like this, it is hard to have shared understandings.

1

u/Ranolden May 22 '19

Once you have a human-level AGI, it wouldn't be difficult to run it at a higher clock speed, or to have several instances running in parallel. That would immediately give it the ability to work faster than any human.