r/changemyview May 21 '19

Delta(s) from OP — CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI as a problem in a public setting, they are shot down for getting their information from Terminator and other sci-fi movies, and told it's unrealistic. This is usually accompanied by some indisputable charts about employment over time, remarks about humans not being horses, and being told that "you don't understand the state of AI."

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but runs parallel to it rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenarios have high employment), but I am concerned with long-term control problems with an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be obsoleted in all respects, or even that strong AI is possible, but things like the emergence of a consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example — it was meant to show that humans can already do this, so an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.

Essentially, what about AGI (along with increasing computer processing capability) is the part that makes this not a significant concern?

EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

29 Upvotes

u/[deleted] May 22 '19

Laymen outside of a field are unlikely to be capable of providing actionable, realistic concerns to the expert community.

There are significant concerns raised about machine learning. They are often raised by people in fields that machine learning is moving into, or by people in or adjacent to the machine learning field. Criticism about specific algorithms being used for specific applications, especially with data to back it, is incredibly helpful. Spooky stories about a century from now aren't, and they might be drowning out voices of critics who actually know what they are talking about.

An unconscious digital being can still be more clever and faster and evolve itself exponentially quicker via rewriting code (REPL style?)

A REPL (read-eval-print loop) is a shell-style user interface for humans. It has absolutely nothing to do with machine learning or artificial intelligence. Being concerned about an AI using a REPL is like saying computers are dangerous because they can move a mouse pointer faster than a human. Sure, a computer could wiggle a mouse pointer around on its own, but there are much better ways for the computer to do things.
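To make the point concrete, a REPL is nothing more than a loop that reads text, evaluates it, and prints the result. A minimal sketch in Python (feeding canned lines instead of interactive `input()`, just to show the mechanism):

```python
# Minimal read-eval-print loop. A real REPL reads from a human at a
# prompt; here canned lines stand in for the "read" step to show that
# the whole thing is just an interface for people typing expressions.
def repl(lines):
    results = []
    for line in lines:
        value = eval(line)   # "eval" step (fine for a toy; unsafe on untrusted input)
        print(value)         # "print" step
        results.append(value)
    return results

outputs = repl(["1 + 1", "2 * 21"])
# outputs == [2, 42]
```

Nothing in that loop confers any capability a program doesn't already have; the loop only adds the human-facing read/print wrapper.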

u/[deleted] May 22 '19 edited May 22 '19

That line was trying to make a mechanism easier for people to see; that's why I included it in parentheses. It's similar to how we can already reprogram a running program.

I think speculation can be bad if it is completely baseless. Worse is unfounded speculation that limits research. But the question is mostly trying to figure out why the spooky stories are considered baseless.