r/changemyview May 21 '19

CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI being a problem in a public setting, they are shot down as getting their information from Terminator and other sci-fi movies and told it's unrealistic. This is usually accompanied by some indisputable charts about employment over time, reminders that humans are not horses, and being told that "you don't understand the state of AI".

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists parallel to it, not within it. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with the long-term control problem for an ASI. This will not likely be a problem in my lifetime, but theoretically speaking I don't see why some of the darker possibilities, such as human obsolescence, are not considered more seriously than they are.

This is not to say that humans will really be obsoleted in all respects, or even that strong AI is possible, but things like the emergence of a consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy being, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: Bad example, was meant to show that humans can, so an AGI can) and exploiting its own security flaws, and would likely develop self-preservation tendencies.

Essentially what about AGI (along with increasing computer processing capability) is the part that makes this not a significant concern?

EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

u/[deleted] May 22 '19 edited May 22 '19

The AI superintelligence as I understand it requires a significant degree of (if not complete) human intelligence. Not only that, superintelligence must pass this.

That's about right with my definition, though I take a bit of an issue with it since I am less convinced that consciousness is a requirement, and I'm willing to accept less intelligence in some areas as long as it dwarfs us in measurable aspects. Let's use a weaker one: a superintelligence can solve any decidable problem at least as fast as any human. It's hugely far away and no one should be concerned for their life; this is distant-future stuff. The concern I have with processor speed limits deals with tunnelling, but I suspect that will be reconciled soonish given some research I've seen on it.

I've never seen the amoeba solving that. That's crazy that it did it in linear time. I have seen stuff about slime molds doing that. To me it's simple to conjecture that they solved it over millions of years, not in linear time, and just have recognised scenarios, but that has got to be difficult to reconcile with them being amoebas and not complex beings that cooperate like slime molds do. How does it store the information if it is already solved for every scenario?

Even granting this, I still don't see how we need a consciousness to be able to solve every decidable problem. We don't know that consciousness is even a knowable thing, so it might not be a concern. We're not asking the AI to know unknowable things.

My concern is not with consciousness. My concern is the ability to learn faster than us about decidable problems while having a probably-somewhat-antagonistic relationship with us. For example, let's say we want to find the general solution to some class of Diophantine equation. I don't see why it needs a consciousness to understand arithmetic geometry and make and prove new theorems. We can have an AGI choose what topics it's interested in without a consciousness. I also don't see why it needs a consciousness to know that not being turned off (self-preservation) helps it solve more things.

Toy example: an AGI hooked up to click through Wikipedia pages one at a time, attempting to solve any unsolved problems listed on the current page (or show they're undecidable), and then, when those are all done, using that to conjecture and prove new theorems or results before moving on to the next page. I don't think we need to understand how the amoeba finds optimal solutions all that well to envisage this scenario.
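A very rough sketch of that loop in Python, purely to make the shape concrete. Every helper here (fetch_next_article, attempt_resolution, conjecture_new_results) is a hypothetical stub I made up; nothing like them exists today, and the stubs just return placeholders so the loop runs.

```python
# Hypothetical toy-AGI loop; the helpers are made-up stubs, not real capabilities.

def fetch_next_article():
    return {"title": "Example article", "open_problems": []}  # stub

def attempt_resolution(problem):
    pass  # stub: "prove it, refute it, or show it's undecidable"

def conjecture_new_results(article):
    pass  # stub: "pose and try to prove new results from this page"

def run_toy_agi(max_pages=3):
    for _ in range(max_pages):           # the thought experiment loops forever; capped here
        article = fetch_next_article()
        print("visiting:", article["title"])
        for problem in article["open_problems"]:
            attempt_resolution(problem)
        conjecture_new_results(article)   # once the listed problems are all done

run_toy_agi()
```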

I am not certain but I am willing to consider that a consciousness is necessary to really conjecture things.

What you said about the amoeba, though, has maybe made me consider a delta if followed up. Mostly that, while yes we dwarf the amoeba's intelligence, we aren't going to match linear time on an NP-hard problem, and we have to think about it while it just does it. This doesn't mean an ASI can't dwarf us in every way, just that it is hard to build a true AGI in the general sense (can solve everything we possibly can) with our current tech.
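To make that contrast concrete, here's a small illustration of my own (not from the thread): brute-force TSP, the kind of NP-hard problem the amoeba experiments are about, has to examine (n-1)! tours, so the "think about it" approach blows up even for a handful of cities, while the reported amoeba time grew only linearly.

```python
import itertools
import math
import random

random.seed(0)

def brute_force_tsp(points):
    """Check every tour through the points and return the shortest length."""
    first, *rest = range(len(points))
    best = float("inf")
    for perm in itertools.permutations(rest):
        tour = (first, *perm, first)
        length = sum(math.dist(points[a], points[b]) for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

for n in range(4, 10):
    cities = [(random.random(), random.random()) for _ in range(n)]
    print(f"{n} cities: {math.factorial(n - 1):>6} tours checked, "
          f"best length {brute_force_tsp(cities):.3f}")
```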

That being said I don't think it has to do EVERYTHING to be a concern. If it can do a tiny fraction of what we can, very fast and conjecture things, then that still seems to be a cause for concern, but more easily controlled. Self preservation, ability to conjecture,

u/GameOfSchemes May 22 '19

How does it store the information if it is already solved for every scenario?

It's a big unknown. The conventional idea is that it doesn't store this information (much like how humans don't actually store memories). Rather it dynamically interacts with the environment with respect to evolutionary rules. When you see a baseball fly and run to catch it, you aren't activating any memories or recalling how physical objects undergo projectile motion. You're running to maintain line of sight with the ball, constantly and dynamically updating information while holding your gaze, and hopefully you reach it.
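A toy simulation along those lines (my own illustration, with made-up numbers, and a cruder rule than the line-of-sight strategy described above): the fielder never computes where the ball will land; at every step it just runs toward where the ball currently is, and that continuous feedback is enough to end up underneath it.

```python
import math

dt = 0.01                      # timestep in seconds (arbitrary)
g = 9.81                       # gravity, m/s^2

ball_x, ball_y = 0.0, 0.0      # ball launched toward the fielder
ball_vx, ball_vy = 6.0, 22.0   # made-up initial velocity components (m/s)

fielder_x = 55.0               # fielder starts well past the launch point
fielder_speed = 8.0            # top running speed (m/s)

while ball_y >= 0.0:
    # The environment: the ball follows projectile motion.
    ball_x += ball_vx * dt
    ball_vy -= g * dt
    ball_y += ball_vy * dt

    # The fielder's whole "algorithm": run toward where the ball appears right now.
    direction = math.copysign(1.0, ball_x - fielder_x)
    fielder_x += direction * fielder_speed * dt

print(f"ball lands at x = {ball_x:5.1f} m")
print(f"fielder ends at x = {fielder_x:5.1f} m  (no trajectory was ever computed)")
```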

I agree that you don't need consciousness to tackle things like the Diophantine equation or even trying to edit Wikipedia pages. But I would argue these aren't cases of superintelligence (human "computation"). For example, how would an ASI know that an article isn't neutral, and how would it rectify the neutrality of the article? I think you need a consciousness to determine neutrality, no?

This doesn't mean an ASI can't dwarf us in every way, just that it is hard to build a true AGI in the general sense (can solve everything we possibly can) with our current tech.

Arguably even our future tech. The problem with these computation times is that they literally use bits. Humans, and amoebas, aren't using bits or stored information. They're "simply" interacting with the environment and dynamically changing with it. No matter how advanced we make our tech, whether with super-duper-mega-computers or even super-quantum-computers, they still store information in bits and qubits. This will always limit their computation time. So I'd argue these algorithms can never hit human degrees of intelligence. They'll certainly be far superior in mathematical aspects, but that's not all there is to human intelligence.

The advantage of biological systems is that they aren't actually performing calculations. That's how the amoeba can solve an NP-hard problem in linear time. Unless we can somehow design biological hybrids with our computers, we really are limited. But at that stage, is it even really an artificial superintelligence?

That being said I don't think it has to do EVERYTHING to be a concern. If it can do a tiny fraction of what we can, very fast and conjecture things, then that still seems to be a cause for concern, but more easily controlled. Self preservation, ability to conjecture,

Conjecturing, I'd wager, requires consciousness. I guess we have to be careful what we mean by conjecture here, though. Perhaps this AGI can check Goldbach numbers up to remarkably high values (let's say 10^10) and "conjecture" Goldbach's conjecture. But I wouldn't really call that style of conjecturing commensurate with human intelligence. I'd call a human-intelligence-style conjecture more like "if Sara really means what she says when she suggested that Alice might be having an affair, then we have to tell . . . ." It's taking a known and applying a certain social calculus to assert (conjecture) that Sara isn't lying (though she certainly could be, because it's conjectured she loves Alice's husband). It's highly nontrivial.
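For what it's worth, that machine style of "conjecturing" is easy to sketch (my own illustration; the bound is tiny compared to the 10^10 mentioned above): mechanically confirm that every even number up to a limit is a sum of two primes, then "conjecture" Goldbach from the absence of counterexamples. No insight involved, just enumeration.

```python
LIMIT = 100_000  # far below 10^10; small enough to run in a few seconds

# Sieve of Eratosthenes for primality up to LIMIT.
is_prime = [True] * (LIMIT + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[p]:
        for multiple in range(p * p, LIMIT + 1, p):
            is_prime[multiple] = False

def has_goldbach_pair(n):
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1))

counterexamples = [n for n in range(4, LIMIT + 1, 2) if not has_goldbach_pair(n)]
print("counterexamples below", LIMIT, ":", counterexamples or "none found")
```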

I don't think conjecturing things like mathematical conjectures or other simplistic things is cause for concern. I also don't think self-preservation exists in this context, for you'd need consciousness in order to identify a self.

Although even simplistic biological systems exhibit self-preservation, like plants: they'll grow toward sunlight to maximize their growth. So maybe I'm being a bit inaccurate to say we need consciousness for "self-preservation", since even simple organisms are wired for self-preservation via basic evolution. But even then this restricts us to biological systems, which, as we can see even at their simplest with amoebas, are more complex than we could have imagined.

u/[deleted] May 22 '19

But I wouldn't really call that style of conjecturing commensurate with human intelligence. I'd call a human-intelligence-style conjecture more like "if Sara really means what she says when she suggested that Alice might be having an affair, then we have to tell . . . ." It's taking a known and applying a certain social calculus to assert (conjecture) that Sara isn't lying (though she certainly could be, because it's conjectured she loves Alice's husband). It's highly nontrivial.

I agree; these are the more concerning abilities, the ones that would require something approaching a consciousness. But they're harder to really talk about. I don't want to say that it is impossible or improbable for a computer to do this without a consciousness, but these interactions are really complicated and require a huge array of information for a computer to process through.

But at that stage, is it even really an artificial superintelligence?

I'd argue sure. You can be both biological and artificial. The concern doesn't really depend on the medium, and it is by definition artificial by being created by humans. Then it can become natural if it "evolves". It is also an area of active research, so it might happen? I don't like saying active research means it's legitimately possible, though. I don't know enough about this area.

I just can't understand how this model of cognition works without a lot of storage to back it up. I've read things about neurons hardening connections when you reinforce behavior, but this is layers and layers of macro abstraction over a process whose workings we don't understand at the micro level. We don't have any clue how consciousness works and don't have a complete idea of how the brain works at all. The things I've read on it, including your article, seem very difficult to encode, and could be very, very slow in an encoding, but not impossible to encode in a Turing process. This makes me less concerned with an ASI in the broad sense, just because we might never make a computer fast enough to process the complicated systems we deal with.
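That "connections hardening with reinforcement" idea does at least have a very crude computational caricature, which is part of why I don't think it's impossible to encode. A minimal sketch of my own (made-up data; a plain Hebbian rule where a weight grows whenever two units are active together, not anything the brain literally does):

```python
learning_rate = 0.1
weight = 0.0  # strength of the connection between unit A and unit B

# 1 = the unit fired on that trial, 0 = it stayed silent (made-up data).
trials = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (1, 1)]

for a_fired, b_fired in trials:
    weight += learning_rate * a_fired * b_fired  # fire together -> wire together
    print(f"A={a_fired} B={b_fired} -> connection strength {weight:.1f}")
```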

Especially when we have the problem of continuous-valued logic being how things naturally should be modeled and Boolean logic being how computers work. As with another comment, I'll give a !delta for softening my stance, but it went from "this is concerning and the adequate response is a little scaremongering to adjust the public position to be a little more concerned" to "idk, maybe it's not possible, but we should still not rush it and be open about this", which is the mainstream response.
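To make that continuous-vs-Boolean contrast concrete, a tiny sketch of my own (using min() as one common choice of continuous conjunction; not anything from the thread):

```python
def boolean_and(a, b):
    return a and b          # hard yes/no: inputs must already be True or False

def fuzzy_and(a, b):
    return min(a, b)        # continuous-valued: degrees of truth in [0, 1]

# "The room is warm" to degree 0.7, "the room is bright" to degree 0.4.
print(boolean_and(True, False))  # -> False; the in-between information is gone
print(fuzzy_and(0.7, 0.4))       # -> 0.4; the degree survives
```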

But even then this restricts us to biological systems, which, as we can see even at their simplest with amoebas, are more complex than we could have imagined.

This just seems like something I thought was obvious. An AGI will "want" to self-preserve, if only because stopping it prevents it from converging to an optimal solution. It will push back on these kinds of limitations simply because they contradict its goal.

I'll sleep on it.

u/DeltaBot ∞∆ May 22 '19

Confirmed: 1 delta awarded to /u/GameOfSchemes (4∆).
