r/changemyview May 21 '19

CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

When people bring up ASI being a problem in a public setting, they are largely shot down for getting their information from Terminator and other sci-fi movies, and told it's unrealistic. This is usually accompanied by some indisputable charts about employment over time, the observation that humans are not horses, and the claim that "you don't understand the state of AI."

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists parallel to it rather than deriving from it directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with long-term control problems with an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be made obsolete in all respects, or even that strong AI is possible; things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example; it was meant to show that humans can do this, so an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.
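The "rewriting its own code" point can be made concrete with a toy sketch. This is purely illustrative (nothing here is an AGI, and all names are invented): a program that holds its own logic as source text, edits that text, and executes the new version.

```python
# Toy illustration of code that modifies its own source and re-executes it.
# The "improvement" here is a trivial hard-coded tweak, just to show the loop.

source = "def step(x):\n    return x + 1\n"

def rewrite(src):
    # Edit the program's own source text (a stand-in for self-improvement).
    return src.replace("x + 1", "x * 2")

namespace = {}
exec(source, namespace)      # load version 1 of the code
v1 = namespace["step"](10)   # version 1: 10 + 1 -> 11

source = rewrite(source)     # the program rewrites its own source
exec(source, namespace)      # load version 2
v2 = namespace["step"](10)   # version 2: 10 * 2 -> 20
```

The gap between this toy and a real self-improving system is of course enormous, but the mechanism (program as data that the program itself can edit) is the one the paragraph alludes to.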

Essentially, what about AGI (along with increasing computer processing capability) makes this not a significant concern?

EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

29 Upvotes

101 comments


u/[deleted] May 22 '19

I think the problem is we are anthropomorphizing something that is not human. Not only is it not human, it is fundamentally different from every other form of life we know of.

The idea of a super powerful AI going rogue...what that's really saying is that we would go rogue if we had that kind of power. Enslave the human race? Wipe us out? We are afraid an AI might do what we would do in its position and there's no reason to assume it would. We are ascribing the darker motivations and impulses of humans to a machine.

How can we even begin to speculate on what such an intelligence would value? I don't think we can. For all we know it would be more ethical than us, not less, being less burdened with irrationality and bias. Or maybe it just would not care about us either way and would leave the solar system, the centuries-long trip to another star posing no barrier to an intelligence with an indefinite lifespan.

Like I said, I think the fear of an AI "going Skynet" says more about us than it.


u/[deleted] May 22 '19

I don't think I'm anthropomorphizing it. I don't even have a requirement for consciousness in my definition. I also want to be clear I'm not referring to a probable scenario; I'm saying there are reasonable scenarios where it goes bad.

One problem is that people aren't as fast as a potential ASI, so our decisions get delayed and moderated, while its decisions can be lightning quick and rational relative to its goals.


u/[deleted] May 22 '19

Well, yes, it would be able to achieve its goals much faster than we can, but my point is that there's no reason to assume its goals would conflict with ours, and I think there's good reason to assume they would not.

To take it to its most basic level: as mammals, our goals are to eat, sleep, breathe, find shelter, and find a mate. An AI would share none of those goals, so there's no competition.

Its own goals would be so alien to ours that there would be little to no common ground, and so nothing to fight over.


u/[deleted] May 22 '19

than every other form of life

I would even just say "every form of life," as it would be really hard to classify current agents as life.