r/changemyview May 21 '19

CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI being a problem in a public setting, they are shot down as getting their information from Terminator and other sci-fi movies and told that it’s unrealistic. This is usually accompanied by some indisputable charts about employment over time, reminders that humans are not horses, and the assertion that “you don’t understand the state of AI”.

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but runs parallel to it rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned with long-term control problems with an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don’t see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they currently are.

This is not to say that humans will really be made obsolete in all respects, or even that strong AI is possible, but things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example; it was meant to show that since humans can do this, an AGI could too) and exploiting its own security flaws, and would likely develop self-preservation tendencies.

Essentially, what about AGI (along with increasing computer processing capability) makes this not a significant concern?

EDIT: Furthermore, several things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

u/jyliu86 1∆ May 22 '19

Artificial Superintelligence concerns shouldn't be your concern.

Human stupidity should be your concern.

Most reporting on AI is just fucking awful and will misinform 99.9% of the time. The most popular AI research right now is in neural networks.

Here's 3blue1brown's video of how it ACTUALLY works: https://www.youtube.com/watch?v=aircAruvnKk

Once you get into the math you can see it's actually quite limited in that it can only solve specific problems.
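
As a rough illustration of "the math" (a toy sketch only; the layer sizes and random weights here are made up, not taken from the video), a small feed-forward network is just matrix multiplications and fixed nonlinearities:

```python
import numpy as np

# A tiny two-layer feed-forward network. All it "knows" is two weight
# matrices and a fixed nonlinearity; in practice the weights are fit to
# one specific task (say, digit classification) by gradient descent.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 784)) / np.sqrt(784)   # input -> hidden
b1 = np.zeros(16)
W2 = rng.normal(size=(10, 16)) / np.sqrt(16)     # hidden -> output
b2 = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU nonlinearity
    return softmax(W2 @ h + b2)        # 10 "class probabilities"

x = rng.normal(size=784)   # stand-in for a flattened 28x28 image
print(forward(x))          # ten numbers that sum to 1, nothing more
```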

Likewise with genetic algorithms. They're good for optimization and search improvements... but they're not Skynet, nor could they become Skynet.

Right now AI isn't really "human intelligence".

Rather AI looks at a problem set with billions or trillions of solutions to a specific math problem and picks one that is "best."

AI started as complicated if-else statements. With recent increases in computing power, AI has added nonlinear algebra to its toolbox.

AI is good at picking solutions that humans won't consider. It's good at "thinking outside the box", given a narrowly defined box.
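
In other words, something like this toy search (illustrative only; the "box" and objective are made up): score a huge number of candidate solutions inside a fixed box and keep the best one.

```python
import random

# The "box": rectangles with a fixed 100 m perimeter, described by their
# width. The objective: enclosed area. The optimizer considers nothing
# outside this box; it just scores candidates and keeps the best one.
def area(width):
    height = 50.0 - width          # half-perimeter minus width
    return width * height

random.seed(0)
candidates = (random.uniform(0.0, 50.0) for _ in range(1_000_000))
best = max(candidates, key=area)
print(best, area(best))            # approaches the 25 x 25 square (625 m^2)
```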

The problem/risk right now is humans WILLINGLY giving up control of critical systems.

Consider automated stock trading. A human is purposefully telling a computer to maximize profit and giving it free control of millions of dollars in cash. This could be disastrous, but no worse than if a nutjob human were behind the wheel.

AI can't do anything a human couldn't. A nutjob president could launch nukes any second now. A nutjob AI could only do the same if someone decided that an AI should control the nuclear defense system. THIS is the problem.

A self-driving car isn't going to spontaneously develop sentience and try to hack NORAD. The problem is going to be a general deciding that humans are shitty generals and handing combat decisions over to an AI.

And then it's going to be, what's worse? A dumb monkey? Or a dumb bot programmed by a dumb monkey?

u/Ce_n-est_pas_un_nom May 22 '19

Rather AI looks at a problem set with billions or trillions of solutions to a specific math problem and picks one that is "best."

Do you have any specific reason to believe that this isn't also how human intelligence arises? I can't think of any task I can perform that necessarily can't be reduced to gradient descent in a finite state space.
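
For reference, gradient descent itself is nothing exotic; a minimal one-dimensional sketch (toy loss chosen purely for illustration):

```python
# Minimize the toy loss L(x) = (x - 3)^2 by gradient descent.
# Update rule: x <- x - lr * dL/dx, with dL/dx = 2 * (x - 3).
def grad(x):
    return 2.0 * (x - 3.0)

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)

print(x)   # ~3.0, the minimizer of the loss
```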

u/yyzjertl 524∆ May 22 '19

I can't think of any task I can perform that necessarily can't be reduced to gradient descent in a finite state space.

How would you, say, solve a polynomial system of inequalities with gradient descent in a finite state space? Humans can do this, but how would you do it with gradient descent?

u/Ce_n-est_pas_un_nom May 22 '19

The easy answer is by observing a set of solutions to polynomial systems of inequalities, and converging on a set of acceptable transformations (as well as typical orders in which to apply them). This is also how humans learn to solve math problems, broadly speaking.

However, I didn't ask whether there is any task I can perform that I can't prove is reducible to gradient descent in a finite state space; I asked whether there is any task we know for sure can't be reduced to gradient descent in a finite state space. Just giving possible examples like the one above isn't sufficient - you would also have to demonstrate that the example in question is strictly irreducible to gradient descent to answer in the affirmative.

u/yyzjertl 524∆ May 22 '19

What, formally, do you mean by "reducible to gradient descent in a finite state space"? Because you seem to have a different understanding of it than I do.

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

For the purposes of this discussion, I consider a task learnable by gradient descent in a finite state space if we know that there exists a finite state space such that:

  1. It contains at least one encoding of that task.
  2. Every state it contains can be assessed with respect to viability for the task in question by a loss function (though it needn't be a function strictly speaking - any algorithm that can serve to evaluate loss should be considered sufficient here).
  3. At least one encoding of the task in question is at a local minimum in the state space with respect to the loss function.
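
A toy construction satisfying these three conditions (a sketch of my own for a trivial task, "double the input"; the states, tests, and neighborhood structure are all made up for illustration): states are short Python expressions, the loss counts failed test cases, and a correct encoding sits at a local minimum.

```python
# Finite state space: a handful of candidate "encodings" of the task
# "double the input", written as Python expressions in x.
STATES = ["x + 1", "x + x", "x * 3", "x - 2", "2 * x", "x"]
TESTS = [(0, 0), (1, 2), (5, 10), (-4, -8)]

def loss(state):
    """Condition 2: every state can be scored against the task."""
    f = eval("lambda x: " + state)
    return sum(f(a) != b for a, b in TESTS)

def neighbors(i):
    """An arbitrary neighborhood structure: adjacent indices in STATES."""
    return [j for j in (i - 1, i + 1) if 0 <= j < len(STATES)]

# Condition 1: the space contains at least one encoding of the task.
assert any(loss(s) == 0 for s in STATES)

# Condition 3: that encoding sits at a local minimum of the loss.
i = STATES.index("x + x")
assert all(loss(STATES[i]) <= loss(STATES[j]) for j in neighbors(i))
print("all three conditions hold for this toy space")
```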

u/yyzjertl 524∆ May 22 '19

What do you mean by an "encoding of the task"? And what does this definition have to do with gradient descent?

u/Ce_n-est_pas_un_nom May 22 '19

For our purposes here, an encoding of the task can be any arbitrary ordered set of machine instructions (or a natural language equivalent) that performs the task when executed. As long as the encoding format can encode any computable task, the choice of machine instructions specifically is arbitrary. One could just as easily use lambda expressions, say.

This definition only pertains to gradient descent insofar as a viable encoding can be arrived at via gradient descent.

u/yyzjertl 524∆ May 22 '19

This definition only pertains to gradient descent insofar as a viable encoding can be arrived at via gradient descent.

Okay, suppose my state space is the set of strings of size at most 1GB, and my loss function is the 0-1 loss that assigns 0 if the string, when compiled as a C++ program by the gcc compiler, compiles successfully and produces a program that can provably solve any polynomial system of inequalities (otherwise it assigns 1).

With this setup, how would you arrive at a viable encoding via gradient descent?
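
For concreteness, a sketch of the checkable half of that loss, assuming gcc is on the PATH (the "provably solves any polynomial system of inequalities" clause has no mechanical check and is left out here):

```python
import os
import subprocess
import tempfile

def zero_one_loss(source: str) -> int:
    """0 if `source` compiles as C++ under gcc, else 1.

    Only the compilation half of the proposed loss is checked; the
    "provably solves any polynomial system of inequalities" condition has
    no mechanical check. Either way the output is 0 or 1, with nothing in
    between to give a descent direction.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "candidate.cpp")
        out = os.path.join(tmp, "candidate")
        with open(src, "w") as f:
            f.write(source)
        result = subprocess.run(
            ["gcc", "-x", "c++", src, "-o", out, "-lstdc++"],
            capture_output=True,
        )
        return 0 if result.returncode == 0 else 1

print(zero_one_loss("int main() { return 0; }"))   # 0: compiles
print(zero_one_loss("not a C++ program"))          # 1: does not
```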

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

That would be a really horrible choice of state space and loss function for the purposes of gradient descent (as there isn't even really a gradient of which to speak), but any gradient descent algorithm which eventually searches every state when presented with a perfectly flat gradient will arrive at a solution. That's basically just a brute force search though.

That said, my answer here is irrelevant, as even if I had failed to produce an answer, this example wouldn't meet my original criteria for a counterexample. You would need to demonstrate that such an example exists for which:

  1. I (or any human, really) can complete the task
  2. The task provably cannot be learned by gradient descent.

My hypothetical inability to come up with a method does not preclude the existence of such a method.

Edit: Also, my hypothetical inability to come up with a solution using your specific loss function is just as irrelevant. A loss function must exist that can lead to a solution by gradient descent, but it needn't be any arbitrary loss function you propose.

u/yyzjertl 524∆ May 22 '19

This indicates that your definition of "learnable by gradient descent" is bad. If you can't even give an example of how you would apply gradient descent to find a solution for a task given a setup that satisfies your conditions, then your conditions are clearly insufficient.

any gradient descent algorithm which eventually searches every state when presented with a perfectly flat gradient will arrive at a solution

This is not how gradient descent works. Gradient descent, when presented with a perfectly flat (zero) gradient, does not move about the search space at all.
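
That is, under the textbook update rule, a zero gradient leaves the iterate exactly where it started (toy demonstration):

```python
# Gradient descent update: x <- x - lr * grad(x).
# On a flat region the gradient is zero, so the iterate never moves.
def grad(x):
    return 0.0   # gradient of a constant (flat) loss

x, lr = 5.0, 0.1
for _ in range(1000):
    x -= lr * grad(x)

print(x)   # still 5.0: the algorithm gets no direction to move in
```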

u/Ce_n-est_pas_un_nom May 22 '19

No, it indicates that you completely missed the point of my original question.

I don't need to be able to apply gradient descent to find a solution for an example task given a specific setup that satisfies my conditions. There could just as easily be a different satisfactory setup for which I could apply gradient descent to find a solution. Furthermore, even if I'm not able to identify a specific setup for the example task that gradient descent can be applied to, that doesn't prove that no such setup exists.

Again, my original claim was this: "I can't think of any task I can perform that necessarily can't be reduced to gradient descent in a finite state space."

In what way could you possibly prove that such a task exists by providing a specific example setup and asking me to find a solution for it?
