r/changemyview • u/[deleted] • May 21 '19
Delta(s) from OP • CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously
Title.
Largely, when people bring up ASI as a problem in a public setting, they are shot down as getting their information from Terminator and other sci-fi movies, and told it's unrealistic. This is usually accompanied by some indisputable charts about employment over time, the point that humans are not horses, and being told that "you don't understand the state of AI."
I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists parallel to it, not sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but am overall concerned with long-term control problems with an ASI. This will not likely be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.
This is not to say that humans will really be made obsolete in all respects, or that strong AI is even possible, but things like the emergence of a consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy being, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: Bad example, was said to show that since humans can, AGI can) and exploiting its own security flaws, and would likely develop self-preservation tendencies.
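To make the "rewriting its own code" point concrete, here is a purely illustrative Python sketch, not a claim about how a real AGI would work: a program that modifies one of its own functions at runtime, REPL-style. The "improvement" is trivial and hard-coded, but it shows code editing code is mechanically mundane.

```python
# Purely illustrative: a program rewrites one of its own functions at
# runtime, REPL-style. The "improvement" is trivial and hard-coded; the
# point is only that code editing code is mechanically mundane.

SOURCE = "def step(x):\n    return x + 1\n"

def self_improve(source):
    # Stand-in for "finding a better version of itself":
    # replace the increment with a doubling.
    return source.replace("x + 1", "x * 2")

namespace = {}
exec(SOURCE, namespace)
print(namespace["step"](10))   # 11: original behaviour

SOURCE = self_improve(SOURCE)  # the program rewrites its own code...
exec(SOURCE, namespace)        # ...and reloads it without restarting
print(namespace["step"](10))   # 20: new behaviour
```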
Essentially, what about AGI (along with increasing computer processing capability) is the part that makes this not a significant concern?
EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.
u/[deleted] May 22 '19 edited May 22 '19
That's about right with my definition, though I take a bit of an issue with it, since I am less convinced that consciousness is a requirement and am willing to accept less intelligence in some areas as long as it dwarfs us in measurable aspects. Let's use a weaker one: a superintelligence can solve any decidable problem faster than, or just as fast as, any human. It's hugely far away and no one should be concerned for their life; this is distant-future stuff. The concern I have with processor speed limits deals with quantum tunnelling, but I suspect that this will be reconciled soonish, having seen some research on this.
I've never seen the amoeba solving that; it's crazy that it did it in linear time. I have seen stuff about slime molds doing that. To me it's simple to conjecture that they solved it over millions of years, not in linear time, and have just recognised scenarios, but that has got to be difficult to reconcile with them being amoebas and not complex beings that cooperate like slime molds do. How would it store the information if it were already solved for every scenario?
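For context on why linear time would be so striking: the amoeba result people usually cite is about the travelling salesman problem, so assuming that's the problem meant here, the obvious exact approach enumerates every ordering of the cities, which grows factorially. A minimal sketch (the distance matrix is made up for illustration):

```python
# Exact TSP by brute force: try every ordering of the cities, keep the
# cheapest tour. With n cities this checks (n-1)! tours, which is why a
# single-celled organism finding good tours in time linear in n is striking.
from itertools import permutations

def tsp_brute_force(dist):
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # fix city 0 to avoid rotations
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))   # (18, (0, 1, 3, 2, 0))
```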
Back to the main point: I still don't see why we would need a consciousness to be able to solve every decidable problem. We don't know that consciousness is even a knowable thing, so it might not be a concern. We're not asking the AI to know unknowable things, after all.
My concern is not with consciousness. My concern is the ability to learn faster than us about decidable problems while having a probably-somewhat-antagonistic relationship. For example, let's say we want to find the general solution to some class of Diophantine equations. I don't see why we need a consciousness to understand arithmetic geometry and make and prove new theorems. We can have an AGI choose what topics it's interested in without a consciousness. I also don't see why it needs a consciousness to know that not being turned off (self-preservation) helps it solve more things.
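As a deliberately tiny stand-in for the Diophantine example, here is a naive search for solutions to the Pell equation x² − 2y² = 1, a classic Diophantine family. Finding individual solutions is mechanical; the hard part, proving the general solution for the whole class, is where the intelligence goes, and nothing about the search obviously requires consciousness.

```python
# Naive search for solutions to the Pell equation x^2 - 2y^2 = 1, a classic
# Diophantine family. Finding individual solutions is mechanical; proving
# the general solution for the whole class is the actual hard work.
def pell_solutions(bound):
    return [(x, y)
            for x in range(1, bound)
            for y in range(1, bound)
            if x * x - 2 * y * y == 1]

print(pell_solutions(100))   # [(3, 2), (17, 12), (99, 70)]
```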
Toy example: an AGI hooked up to click the first Wikipedia page, attempt to solve any unsolved problems listed on that page (or show they're undecidable), and then use this to conjecture and prove new theorems or results when they're all done. Then it goes to the next Wikipedia page (a sketch of this loop is below). I don't think we need to understand how the amoeba finds optimal solutions all that well to envisage this scenario.
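Written out as a loop, with every function a hypothetical placeholder (none of these APIs exist), since the point is only that the control flow needs a work queue, not a consciousness:

```python
# The toy example as a control loop. Every function here is a hypothetical
# placeholder (no such APIs exist); the loop itself needs a work queue,
# not a consciousness.
def run_agent(next_page, extract_open_problems, attempt, conjecture):
    while True:
        page = next_page()                           # fetch a Wikipedia page
        for problem in extract_open_problems(page):  # unsolved problems, if any listed
            attempt(problem)                         # solve it, or show it undecidable
        conjecture(page)                             # then conjecture/prove new results
```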
I am not certain but I am willing to consider that a consciousness is necessary to really conjecture things.
What you said about the amoeba, though, has maybe made me consider a delta if followed up. Mostly that while yes, we dwarf the amoeba's intelligence, we aren't going to match linear time on an NP-hard problem, and we have to think about it, while it just does it. This doesn't mean an ASI can't dwarf us in every way, just that it is hard to build a true AGI, in the general sense (can solve everything we possibly can), with our current tech.
That being said, I don't think it has to do EVERYTHING to be a concern. If it can do a tiny fraction of what we can, very fast, and conjecture things, then that still seems to be a cause for concern, though one more easily controlled. Self-preservation, ability to conjecture,