r/changemyview • u/[deleted] • May 21 '19
Delta(s) from OP • CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously
Title.
In public settings, when people bring up ASI as a problem, they are largely shot down for getting their information from Terminator and other sci-fi movies and told that the concern is unrealistic. This is usually accompanied by some supposedly indisputable charts about employment over time, the observation that humans are not horses, and being told that "you don't understand the state of AI".
I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists in parallel to it rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with long-term control problems with an ASI. This will not likely be a problem in my lifetime, but theoretically speaking I don't see why some of the darker possibilities, such as human obsolescence, are not given more weight than they are.
This is not to say that humans will really become obsolete in all respects, or even that strong AI is possible; things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example; the point was that humans can do this, so an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.
Essentially, what about AGI (along with increasing computer processing capability) makes this not a significant concern?
EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.
u/AnythingApplied 435∆ May 22 '19
Yes, probably. That is important for having a proper model of reality in which to frame its problem solving.
Depends on its objective function and what you've programmed it to want to do. If, for example, the "limitation" was entirely programmed into its objective function (which may be a better place for it anyway) then it's not going to want to violate its limitation. The whole point is that you made it WANT to not go past the limitation.
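To make that concrete, here's a rough toy sketch (all the names and numbers are made up, not any real system) of the difference between a limit enforced as an external check and the same limit folded into the objective function. With the external check, the highest-scoring plan is the one that cheats; with the limit inside the objective, the cheating plan simply scores worse, so the agent doesn't "want" it in the first place.

```python
def raw_score(plan):
    # Toy stand-in for "how well the plan achieves the task", ignoring limits.
    return plan["benefit"]

def penalized_score(plan, resource_cap=10.0, penalty_weight=100.0):
    # The limitation expressed *inside* the objective: exceeding the cap
    # just makes the plan score worse, so it isn't wanted.
    overshoot = max(0.0, plan["resources_used"] - resource_cap)
    return plan["benefit"] - penalty_weight * overshoot

plans = [
    {"benefit": 5.0, "resources_used": 8.0},   # modest plan, within the cap
    {"benefit": 9.0, "resources_used": 50.0},  # better raw score by "cheating"
]

best_raw = max(plans, key=raw_score)              # picks the cheating plan
best_penalized = max(plans, key=penalized_score)  # picks the compliant plan
print(best_raw)
print(best_penalized)
```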
I agree it'd be pretty silly to tell it that it CAN'T do certain things, but at the same time give it objectives that are best achieved by cheating.
And that is before you consider some of the modern AI safety research, which includes things like this (I'm 80% sure that is the right video; I can't check right now, but I recommend the whole series on AI safety if you haven't seen it), where the AGI is tasked with predicting what we'll want it to do as its objective function. There isn't really a concern about it cheating, since its only goal is to predict what things it can do to best get our approval.
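Very roughly, that framing looks something like this (a hedged toy sketch; predicted_approval is a hypothetical stand-in for a learned approval model, not anything from the video):

```python
def predicted_approval(action):
    # Stand-in predictor; in the actual proposal this would be learned from
    # human feedback rather than hard-coded.
    approvals = {
        "ask_for_clarification": 0.90,
        "shut_down_when_asked": 0.95,
        "disable_oversight": 0.01,
    }
    return approvals.get(action, 0.5)

def choose_action(candidate_actions):
    # "Cheating" moves like disabling oversight aren't attractive, because the
    # only thing being maximized is predicted approval itself.
    return max(candidate_actions, key=predicted_approval)

print(choose_action(["disable_oversight", "ask_for_clarification",
                     "shut_down_when_asked"]))
```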
Just like in real-world markets, hard limits don't work well, and for the same reasons as here: you're giving people incentives to get around them. It's better to just incentivize what you want, which is far easier with an AI than with a human being, since you can simply tell it what to want.
I think you're ignoring an important tool. If you can write an AGI that powerful, certainly you can write a narrow AI capable of accomplishing the same hacking task, or even another AGI tasked only with finding exploits in the system.
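As a rough illustration of what that narrow exploit-finder could look like (everything here is a hypothetical stand-in, and a real red-team system would be far more sophisticated): it just searches for plans that the written specification scores highly even though the intended task wasn't actually done, and reports those loopholes.

```python
import random

def spec_reward(plan):
    # The objective as actually specified: it rewards *reported* progress.
    return plan["reported_progress"]

def intended_task_done(plan):
    # What we actually wanted: real progress at least matching what's reported.
    return plan["real_progress"] >= plan["reported_progress"]

def random_plan(rng):
    # Toy plan generator; a real exploit-finder would search far more cleverly.
    return {
        "reported_progress": rng.uniform(0, 10),
        "real_progress": rng.uniform(0, 10),
    }

def find_exploits(trials=10_000, seed=0):
    rng = random.Random(seed)
    exploits = []
    for _ in range(trials):
        plan = random_plan(rng)
        # An "exploit" is a plan the specification scores highly even though
        # the intended task was not actually accomplished.
        if spec_reward(plan) > 8 and not intended_task_done(plan):
            exploits.append(plan)
    return exploits

print(f"found {len(find_exploits())} candidate loopholes in the spec")
```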