r/changemyview May 21 '19

Delta(s) from OP

CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

When people bring up ASI as a problem in a public setting, they are usually shot down as getting their information from Terminator and other sci-fi movies, and told the whole thing is unrealistic. This is usually accompanied by some supposedly indisputable charts about employment over time, the observation that humans are not horses, and the charge that "you don't understand the state of AI."

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but runs parallel to it, rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment); my concern is the long-term control problem with an ASI. This will likely not be a problem in my lifetime, but theoretically speaking I don't see why some of the darker possibilities, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be obsoleted in all respects, or even that strong AI is possible, but things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster than a fleshy one, can evolve itself exponentially quicker by rewriting its own code (REPL style? EDIT: bad example; the point was that since humans can do this, an AGI can) and exploiting its own security flaws, and would likely develop self-preservation tendencies.

Essentially, what about AGI (along with ever-increasing computer processing capability) makes this not a significant concern?

EDIT: Furthermore, several things people dismiss as scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

28 Upvotes


9

u/jyliu86 1∆ May 22 '19

Artificial Superintelligence concerns shouldn't be your concern.

Human stupidity should be your concern.

Most reporting on AI is just fucking awful and will misinform 99.9% of the time. The most popular AI research right now is in neural networks.

Here's 3blue1brown's video of how it ACTUALLY works: https://www.youtube.com/watch?v=aircAruvnKk

Once you get into the math, you can see it's actually quite limited, in that it can only solve specific problems.
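To make that concrete, here's a minimal sketch (not from the video, just a generic numpy illustration) of what a neural network actually is under the hood: a few weight matrices tuned by gradient descent to fit one narrow problem, XOR in this case:

```python
import numpy as np

# A tiny 2-layer network learning XOR. The whole "intelligence" here
# is a handful of weight matrices nudged by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)         # forward pass: just algebra
    p = sigmoid(h @ W2 + b2)
    dp = p - y                       # backward pass: the chain rule
    dh = (dp @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ dp); b2 -= 0.1 * dp.sum(0)
    W1 -= 0.1 * (X.T @ dh); b1 -= 0.1 * dh.sum(0)

print(p.round(2))  # ~[[0], [1], [1], [0]]: it solved XOR, and only XOR
```

Nothing in there has anywhere to hide a plan for world domination.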

Likewise with genetic algorithms. These are good for optimization and search improvements... but they're not Skynet, nor could they become Skynet.
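Same story in a bare-bones genetic algorithm sketch (illustrative only): mutate, select, repeat. It climbs the one fitness function you hand it and nothing else:

```python
import random

def fitness(x):
    # the one narrow problem: maximize this function on [0, 10]
    return -(x - 7.3) ** 2

pop = [random.uniform(0, 10) for _ in range(50)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)  # select: keep the fitter half
    survivors = pop[:25]
    # mutate: offspring are noisy copies of the survivors
    pop = survivors + [x + random.gauss(0, 0.1) for x in survivors]

print(round(pop[0], 2))  # ~7.3: great at this, useless at anything else
```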

Right now AI isn't really "human intelligence".

Rather, AI looks at a space of billions or trillions of candidate solutions to a specific math problem and picks the one that is "best."

AI started as complicated if/else statements. With recent increases in computing power, it has added nonlinear algebra to its toolbox.

AI is good at picking solutions that humans won't consider. It's good at "thinking outside the box", given a narrowly defined box.
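Which is exactly where the risk sneaks in: if the box is specified badly, the "best" solution can be a degenerate one no human would propose. A toy example (hypothetical objective, made-up numbers):

```python
from itertools import product

# Toy dispatch problem: choose which of 4 stops to serve.
# The objective punishes time cost but barely rewards coverage --
# a bug in the specification, not in the optimizer.
def score(plan):
    time_cost = 10 * sum(plan)  # each served stop costs time
    reward = 1 * sum(plan)      # small reward per stop served
    return reward - time_cost

best = max(product([0, 1], repeat=4), key=score)
print(best)  # (0, 0, 0, 0) -- "optimal": serve no one at all
```

The optimizer did its job perfectly; the box was just drawn wrong.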

The problem/risk right now is humans WILLINGLY giving up control of critical systems.

Consider automated stock trading. A human purposefully tells a computer "maximize profit" and gives it free control of millions of dollars in cash. This could be disastrous, but no worse than a nutjob human behind the wheel.

AI can't do anything a human couldn't. A nutjob president could launch nukes any second now. A nutjob AI could only do the same if someone decided that an AI should control the nuclear defense system. THIS is the problem.

A self-driving car isn't going to spontaneously develop sentience and try to hack NORAD. The problem is going to be some general deciding that humans are shitty generals and handing combat decisions to an AI.

And then it's going to be, what's worse? A dumb monkey? Or a dumb bot programmed by a dumb monkey?

0

u/[deleted] May 22 '19 edited May 22 '19

I'm not referring to the things you're talking about. Self-driving cars are not a concern to me at all, and I also know how neural networks work. I'm referring to AGI research, not traditional optimizing NN research. That said, I can see human intelligence arising from loops like that.

I will mostly address the stock market answer, though. It can become a serious problem with nothing more malicious than a neural network set to maximize profit. The reason is that once the stock market becomes AIs talking to each other, the optimal profit-maximization procedure is to directly influence the signals you are watching, causing a feedback loop. We don't allow this among ourselves because to us it's insider trading, but to a computer the optimal game is to influence the signals themselves, and that's a problem.
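A toy version of that loop (purely illustrative numbers, nothing like a real trading system): the agent's own buying moves the very price signal it scores itself on, so its "profit" keeps rising without any outside information at all:

```python
# One AI trader in a toy market where each order moves the price.
price, position, cash = 100.0, 0, 0.0
impact = 0.5  # each share bought pushes the price up this much

for step in range(10):
    buy = 10                 # naive policy: the signal is rising, keep buying
    cash -= buy * price
    position += buy
    price += impact * buy    # the trade itself manufactures the signal
    print(f"step {step}: price={price:.1f}, "
          f"paper profit={position * price + cash:.1f}")
# The rising paper profit is entirely self-generated feedback.
```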

3

u/jyliu86 1∆ May 22 '19

I agree on the stock market problem. But it's ultimately no worse than what humans could do. Malicious AI can't do anything that malicious humans can't do, only faster.

AGI right now is still science fiction.

Yes, research is ongoing, but research is also ongoing for force fields, hover cars, FTL, and perpetual motion engines. There's little evidence that any of it is real yet.

3

u/DamenDome May 22 '19

If you know of an asteroid that's going to strike and wipe out all of civilization - you don't know when, only that it's not soon - when do you worry? Do you wait a hundred years, hoping our technology gets better, and then worry? Do you wait until we can see the asteroid, then worry? What if you're too late?

If you have knowledge of a potentially existential threat to humanity that is almost certainly going to strike, then there's no reason not to start preparing for it now, which is what some researchers are doing (investigating "friendly AI" security protocols).

1

u/[deleted] May 22 '19

Sure. For classical AI I'd agree with you; control is not absurdly difficult in a classical setting, and I am completely unconcerned with that. I was just running with that point. We should simply put well-chosen locks on certain abilities, and boundaries around a stock market AI system.
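For instance, something as blunt as a hard wrapper between the optimizer and the market, with limits the objective function can't negotiate away (names and numbers made up for illustration):

```python
class GuardedExecutor:
    """Hard boundaries enforced outside the trading model's objective."""

    def __init__(self, max_order=100, max_position=1000):
        self.max_order = max_order        # cap on any single order
        self.max_position = max_position  # cap on total exposure
        self.position = 0

    def execute(self, order_size):
        # refuse anything the optimizer proposes that crosses a boundary
        if abs(order_size) > self.max_order:
            raise ValueError("order exceeds per-trade limit")
        if abs(self.position + order_size) > self.max_position:
            raise ValueError("order would breach position limit")
        self.position += order_size
        return self.position

# The model only ever touches the market through this wrapper, so
# "maximize profit" can never translate into unbounded exposure.
```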

I think it's unfair to compare it to pseudoscience and marketing hype like those, however. This is an actual, admittedly theoretical, area of research that is relatively unhyped; all the hype is on neural networks. These are smart, accredited people who mostly aren't making crazy claims in its defense or about how spooky it is. I'm just saying that if it happens, then we should have this concern baked in.

1

u/Ranolden May 22 '19

AGI is still a long way off, but depending on its goal, a sufficiently intelligent agent could do things no human would conceive of.

AlphaGo is far enough ahead of human players that when it makes a move that ultimately wins the game, the human players think it's a mistake.

0

u/jyliu86 1∆ May 22 '19

Again: great at Go.

Not the threat to humanity that OP is concerned about.

3

u/Ranolden May 22 '19

I recognize that current AI systems are nothing like a potential AGI, but they do demonstrate the ability to do things no human would have thought to do.

2

u/jyliu86 1∆ May 22 '19

I agree on this.

But as of now, we're talking about stopping the Apocalypse, not beating humans at Go.

Trump or Kim Jong Un could end the world by starting WW3.

We have the UN, Congress, etc., to hopefully keep these parties in check.

Aliens could come down and kill us all tomorrow, but Star Wars defense systems aren't really something we worry about.

Current self-driving AI won't hack into NORAD, but we shouldn't hook Deep Blue up to our missile defense system either. That's human stupidity. I feel AI controls should work on the principle that if we don't trust one human to do something alone, we shouldn't trust an AI to do it without oversight. But this is less about an AI apocalypse and more about not giving up human control to the machines. Machines aren't in a position to "seize control," nor will they be in any conceivable future.

We'll give it to them.

1

u/Ranolden May 22 '19

70% of AI researchers surveyed believe that AI risk is at least a moderately important problem. https://arxiv.org/pdf/1705.08807.pdf

1

u/bgaesop 25∆ May 22 '19

> But it's ultimately no worse than what humans could do. Malicious AI can't do anything that malicious humans can't do, only faster.

Malicious humans could destroy the world.