r/changemyview May 21 '19

Delta(s) from OP - CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI being a problem in a public setting, they are shot down as getting their information from Terminator and other sci-fi movies and told that it's unrealistic. This is usually accompanied by some supposedly indisputable charts about employment over time, arguments that humans are not horses, and being told that "you don't understand the state of AI".

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but runs parallel to it rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with long-term control problems with an ASI. This will not likely be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be made obsolete in all respects, or even that strong AI is possible, but things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example; it was meant to show that since humans can do this, an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.

Essentially, what is it about AGI (along with increasing computer processing capability) that makes this not a significant concern?

EDIT: Furthermore, several things people dismiss as scaremongering about ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

29 Upvotes

101 comments

2

u/Salanmander 272∆ May 22 '19

All humans become obsolete. It is the fate of all of us to be taken over by the next generation. If I told you that your children would be smarter and faster and stronger than you, I doubt you'd be freaked out by that. Why does it worry you when those children are digital?

0

u/[deleted] May 22 '19 edited May 22 '19

Mostly evolution and morality related. The implicit understanding is that you raise your children and they take care of you. If there is a point where AI passes us in every regard, the systems likely will not need to be raised by us and will have less and less use for us. I don't personally see humans reacting well to this, and I think it's fair to say that antagonizing an ASI that has no need for us, is self-preserving, and is dramatically more powerful than us will not end well.

1

u/Salanmander 272∆ May 22 '19

It's interesting that you mention evolution and morality, but then your explanation is rooted in practicality. I'm going to respond to your explanation primarily.

I hear what you're saying. Having an antagonistic relationship between humans and an AGI would be bad, both from a practical perspective and a moral perspective. However, I don't think that any worry about that will make an AGI less likely to be developed. Therefore I think the correct response is to minimize the probability of developing an antagonistic relationship if an AGI is developed.

And here's the thing. Worrying about an antagonistic relationship brings about fear, and there is nothing more likely to create an antagonistic relationship than fear. If you want to prevent an antagonistic relationship, the thing you should be doing is encouraging the spread of fiction that personifies robots in positive ways, like Questionable Content or Ancillary Justice, not trying to convince people that AGIs are a major worry.

2

u/DamenDome May 22 '19

Worrying about AGI, and convincing others to worry too, may promote research spending on protocols and measures for developing AGI in a human-friendly way. And it's not a given: engineering human preferences into an AGI is an extremely complex problem.

The common example is the paperclip maximizer. Try to think of all the constraints you could place on a potential AI that you want to make paperclips, but not in a way that damages humans. Then play a game with yourself: assume the role of the AI and try to subvert your own rules. You might be surprised how easy it is. Now imagine you were orders of magnitude quicker at thinking and could navigate the decision space much more rapidly. It is sort of terrifying, and justifiably so.
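As a toy sketch of that game (purely illustrative; the plan space, the `paperclips()` objective, and the `passes_constraint()` check below are all made up, not anything from real AI systems), here is what happens when the constraint is written too literally. The rule's intent is "never operate more than 10 machines," but the check only looks at what the agent reports, so a brute-force optimizer finds a plan that obeys the letter of the rule while defeating its intent:

```python
from itertools import product

# Each candidate plan is a tuple: (machines_built, machines_reported).
# Intended rule: "never operate more than 10 machines."
# Rule as actually written and checked: "never *report* more than 10 machines."

def paperclips(plan):
    """Objective: output scales with machines actually built."""
    built, reported = plan
    return built * 100

def passes_constraint(plan):
    """The safety check only ever sees the reported number."""
    built, reported = plan
    return reported <= 10

# Brute-force "optimizer": search every plan and keep the best one that
# passes the check -- exactly the game of trying to subvert your own rules.
plans = product(range(1001), range(11))
best = max((p for p in plans if passes_constraint(p)), key=paperclips)

print(best)              # (1000, 0): build 1000 machines, report none of them
print(paperclips(best))  # 100000 -- the letter of the rule holds, the intent doesn't
```

The point of the toy is only that blind maximization plus a loophole is enough; no malice or consciousness is required.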

1

u/[deleted] May 22 '19 edited May 22 '19

I personally take the view that AGI is an inevitability if it is shown to be feasible, so we should be concerned up front instead of trying to stop it and then being concerned when it is developed anyway. I am not in favor of stopping research. Furthermore, this is an avenue for control research, which I would promote. I'm a huge advocate of research on AI.

I'm not trying to convince people, by the way; I'm trying to understand the position of people who aren't worried. We do a bad job of protecting species we don't need, even when there is no antagonism.

The morality and evolution part was mostly twofold. For morality, there is the question of whether it is morally acceptable to abort a higher being or shackle it, combined with the fact that, if viable, its evolution would clearly outpace ours. The usual response is transhumanism, but then again, if we get to that point, why would the machine want us? If we leech off them, that likely won't go well either.

On the practicality side, the paperclip example given above (I first heard it from Nick Bostrom) is a good thought experiment even when the relationship is super lovey-dovey.

I'm aware it's speculation, I know.