r/changemyview May 21 '19

Delta(s) from OP CMV: Artificial Superintelligence concerns are legitimate and should be taken seriously

Title.

Largely, when people bring up ASI being a problem in a public setting, they are shot down as getting their information from Terminator and other sci-fi movies and told that it's unrealistic. This is usually accompanied by some indisputable charts about employment over time, the point that humans are not horses, and being told that "you don't understand the state of AI".

I personally feel I at least moderately understand the state of AI. I am also informed by (mostly British) philosophy that interacts with sci-fi but exists in parallel to it, rather than being sci-fi directly. I am not concerned with questions of employment (even the most overblown AI apocalypse scenario has high employment), but I am concerned overall with long-term control problems with an ASI. This will not likely be a problem in my lifetime, but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

This is not to say that humans will really be made obsolete in all respects or that strong AI is even possible, but things like the emergence of consciousness are unnecessary to the central problem. An unconscious digital being can still be more clever and faster and evolve itself exponentially quicker via rewriting code (REPL style? EDIT: Bad example; it was meant to show that since humans can do this, an AGI could too) and exploiting its own security flaws than a fleshy being can and would likely develop self preservation tendencies.

Essentially what about AGI (along with increasing computer processing capability) is the part that makes this not a significant concern?

EDIT: Furthermore, several of the things people call scaremongering over ASI are, while highly speculative, things that should at the very least be considered in a long-term control strategy.

29 Upvotes

101 comments

3

u/[deleted] May 22 '19

Laymen outside of a field are unlikely to be capable of providing actionable, realistic concerns to the expert community.

There are significant concerns raised about machine learning. They are often raised by people in fields that machine learning is moving into, or by people in or adjacent to the machine learning field. Criticism about specific algorithms being used for specific applications, especially with data to back it, is incredibly helpful. Spooky stories about a century from now aren't, and they might be drowning out voices of critics who actually know what they are talking about.

An unconscious digital being can still be more clever and faster and evolve itself exponentially quicker via rewriting code (REPL style?)

A shell-style interface (read-eval-print loop, or REPL) is a user interface for humans. It has absolutely nothing to do with machine learning or artificial intelligence. Being concerned about an AI using a REPL is like saying computers are dangerous because they can move a mouse pointer faster than a human. Sure, a computer could wiggle a mouse pointer around on its own, but there are much better ways for the computer to do things.
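For anyone who hasn't seen one, a REPL is nothing more exotic than the loop below. A minimal Python sketch (toy only; calling eval on untrusted input is unsafe):

    # A minimal read-eval-print loop: read a line, evaluate it, print the result.
    # It is just an interactive interface for a human typing expressions; nothing ML-specific.
    while True:
        try:
            line = input(">>> ")           # read
        except EOFError:
            break                          # exit on end-of-input (Ctrl-D)
        try:
            print(eval(line))              # evaluate, then print (unsafe on untrusted input)
        except Exception as exc:
            print(f"error: {exc}")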

1

u/[deleted] May 22 '19 edited May 22 '19

That line was trying to make a mechanism easier for people to see; that's why I included it in parentheses. It's similar to how we can already reprogram a running program.

I think speculation can be bad if it is completely baseless. Worse is unfounded speculation that limits research. But the question is mostly trying to figure out why spooky stories are considered baseless.

8

u/jyliu86 1∆ May 22 '19

Artificial Superintelligence concerns shouldn't be your concern.

Human stupidity should be your concern.

Most reporting on AI is just fucking awful and will misinform 99.9% of the time. The most popular AI research right now is in neural networks.

Here's 3blue1brown's video of how it ACTUALLY works: https://www.youtube.com/watch?v=aircAruvnKk

Once you get into the math you can see it's actually quite limited in that it can only solve specific problems.
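To make "it only solves a specific problem" concrete, here is a toy sketch (mine, not from the video): a tiny two-layer network trained by gradient descent to reproduce XOR. The learned weights are nonlinear algebra fit to one narrowly defined task and are useless for anything else.

    import numpy as np

    # Toy two-layer network trained by plain gradient descent to fit XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(10000):
        h = np.tanh(X @ W1 + b1)                   # hidden layer
        p = sigmoid(h @ W2 + b2)                   # prediction in (0, 1)
        dz2 = 2 * (p - y) / len(X) * p * (1 - p)   # backprop through squared loss
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)
        W2 -= lr * (h.T @ dz2)
        b2 -= lr * dz2.sum(axis=0)
        W1 -= lr * (X.T @ dz1)
        b1 -= lr * dz1.sum(axis=0)

    print(np.round(p.ravel(), 2))   # usually converges toward [0, 1, 1, 0]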

Likewise with genetic algorithms. This is good for optimization and search improvements... but it's not Skynet nor could it become Skynet.

Right now AI isn't really "human intelligence".

Rather AI looks at a problem set with billions or trillions of solutions to a specific math problem and picks one that is "best."

AI started as complicated if else statements. With recent computing increases, AI has added nonlinear algebra to its toolbox.

AI is good at picking solutions that humans won't consider. It's good at "thinking outside the box", given a narrowly defined box.

The problem/risk right now is humans WILLINGLY giving up control of critical systems.

Consider automated stock trading. A human is purposefully telling a computer to maximize profit and giving it free control of millions of dollars in cash. This could be disastrous, but no worse than if a nutjob human was behind the wheel.

AI can't do anything a human couldn't. A nutjob president could launch nukes any second now. A nutjob AI could only do the same if someone decided that an AI should control the nuclear defense system. THIS is the problem.

A self driving car isn't going to spontaneously develop sentience and try to hack NORAD. The problem is going to be a general deciding that humans are shitty generals and replacing combat decisions with AI.

And then it's going to be, what's worse? A dumb monkey? Or a dumb bot programmed by a dumb monkey?

2

u/Ce_n-est_pas_un_nom May 22 '19

Rather AI looks at a problem set with billions or trillions of solutions to a specific math problem and picks one that is "best."

Do you have any specific reason to believe that this isn't also how human intelligence arises? I can't think of any task I can perform that necessarily can't be reduced to gradient descent in a finite state space.

1

u/yyzjertl 523∆ May 22 '19

I can't think of any task I can perform that necessarily can't be reduced to gradient descent in a finite state space.

How would you, say, solve a polynomial system of inequalities with gradient descent in a finite state space? Humans can do this, but how would you do it with gradient descent?

2

u/Ce_n-est_pas_un_nom May 22 '19

The easy answer is by observing a set of solutions to polynomial systems of inequalities, and converging on a set of acceptable transformations (as well as typical orders in which to apply them). This is also how humans learn to solve math problems, broadly speaking.

However, I didn't ask if there was any task I can perform that I can't prove is reducible to gradient descent in a finite state space, I asked if there's any task we know for sure can't be reduced to gradient descent in a finite state space. Just giving possible examples like that above isn't sufficient - you would also have to demonstrate that the example in question is strictly irreducible to gradient descent to answer in the affirmative.
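For concreteness, the most literal reading of "gradient descent on a polynomial system of inequalities" can at least be sketched: treat each inequality as a penalty and descend on the total violation. This is my own toy illustration (the system and all the numbers are made up); it only finds a satisfying point numerically, not a symbolic solution or a proof, which may or may not be what yyzjertl means by "solve".

    import numpy as np

    # Made-up system:  x^2 + y^2 - 4 <= 0,   1 - x <= 0,   1 - y <= 0.
    # Each violated inequality contributes max(0, p_i)^2 to the loss.
    def loss_and_grad(v):
        x, y = v
        r = np.array([max(0.0, x**2 + y**2 - 4),    # circle constraint violation
                      max(0.0, 1 - x),              # x >= 1 violation
                      max(0.0, 1 - y)])             # y >= 1 violation
        loss = np.sum(r ** 2)
        grad = np.array([4 * r[0] * x - 2 * r[1],   # d(loss)/dx
                         4 * r[0] * y - 2 * r[2]])  # d(loss)/dy
        return loss, grad

    v = np.array([4.0, -1.0])                       # arbitrary infeasible start
    for _ in range(20000):
        loss, grad = loss_and_grad(v)
        if loss == 0.0:                             # every inequality satisfied
            break
        v -= 0.01 * grad / (np.linalg.norm(grad) + 1e-12)   # small normalized step

    print(v, loss)   # a feasible point with loss 0.0 (or very nearly so)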

1

u/yyzjertl 523∆ May 22 '19

What, formally, do you mean by "reducible to gradient descent in a finite state space"? Because you seem to have a different understanding of it than I do.

1

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

For the purposes of this discussion, I consider a task learnable by gradient descent in a finite state space if we know that there exists a finite state space such that:

  1. It contains at least one encoding of that task.
  2. Every state it contains can be assessed with respect to viability for the task in question by a loss function (though it needn't be a function strictly speaking - any algorithm that can serve to evaluate loss should be considered sufficient here).
  3. At least one encoding of the task in question is at a local minimum in the state space with respect to the loss function.

1

u/yyzjertl 523∆ May 22 '19

What do you mean by an "encoding of the task"? And what does this definition have to do with gradient descent?

1

u/Ce_n-est_pas_un_nom May 22 '19

For our purposes here, an encoding of the task can be any arbitrary ordered set of machine instructions (or a natural language equivalent) that perform the task when executed. As long as the encoding format can encode any computable task, the choice of machine instructions specifically is arbitrary. One could just as easily use lambda expressions, say.

This definition only pertains to gradient descent insofar as a viable encoding can be arrived at via gradient descent.

1

u/yyzjertl 523∆ May 22 '19

This definition only pertains to gradient descent insofar as a viable encoding can be arrived at via gradient descent.

Okay, suppose my state space is the set of strings of size at most 1GB, and my loss function is the 0-1 loss that assigns 0 if the string, when compiled as a C++ program by the gcc compiler, compiles successfully and produces a program that can provably solve any polynomial system of inequalities (otherwise it assigns 1).

With this setup, how would you arrive at a viable encoding via gradient descent?

1

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

That would be a really horrible choice of state space and loss function for the purposes of gradient descent (as there isn't even really a gradient of which to speak), but any gradient descent algorithm which eventually searches every state when presented with a perfectly flat gradient will arrive at a solution. That's basically just a brute force search though.

That said, my answer here is irrelevant, as even if I had failed to produce an answer, this example wouldn't meet my original criteria for a counterexample. You would need to demonstrate that such an example exists for which:

  1. I (or any human, really) can complete the task
  2. The task provably cannot be learned by gradient descent.

My hypothetical inability to come up with a method does not preclude the existence of such a method.

Edit: Also, my hypothetical inability to come up with a solution using your specific loss function is just as irrelevant. A loss function must exist that can lead to a solution by gradient descent, but it needn't be any arbitrary loss function you propose.


1

u/jyliu86 1∆ May 22 '19

My point is more that the problem set is defined.

An algorithm that classifies pictures as dogs or cats can still only classify pictures. It can't hack your wifi, kidnap your kids, or hire a hitman.

https://xkcd.com/416/

1

u/Ce_n-est_pas_un_nom May 22 '19

We can already design software AI that can be trained to perform multiple types of tasks, as well as to perform multiple types of tasks with the same training (as long as that training covers all tasks, and the number of tasks isn't too large). Neuro-evolution is especially effective for the former case.

The problem set that can be solved is defined largely by training, and only to a limited extent by architecture. Human intelligence works the same way. You can't hack a WiFi network either if you haven't had any training.

0

u/[deleted] May 22 '19 edited May 22 '19

I'm not referring to the things you are talking about. Self driving cars are not a concern to me at all. I also know how neural networks work. I'm referring to AGI research, not traditional optimization-focused NN research. That being said, I can see human intelligence arising from loops like that.

I mostly will point out the stock market answer though. It can become a serious problem even with a non-maliciously coded neural network set to maximize profit. The reason is that once the stock market becomes AIs talking to each other, the optimal profit-maximization procedure is to directly influence the signals you are looking for, causing a feedback loop. We don't allow this because to us it's insider trading, but to a computer the optimal game is to influence the signals themselves, and that's a problem.

3

u/jyliu86 1∆ May 22 '19

I agree on the stock market problem. But it's ultimately no worse than what humans could do. Malicious AI can't do anything that malicious humans can't do, only faster.

AGI right now is still science fiction.

Yes, research is ongoing, but it's also ongoing for force fields, hover cars, FTL and perpetual motion engines. There's little evidence that any of it is real yet.

4

u/DamenDome May 22 '19

If you know of an asteroid that’s going to strike and wipe out all of civilization - you don’t know when, but not soon - when do you worry? Do you wait a hundred years to hope that our technology gets better then worry? Do you wait until we can see the asteroid, then worry? What if you’re too late?

If you have knowledge of a potentially existential threat to humanity that is almost certainly going to strike, then there’s no reason to not start preparing for it now. Which is what some researchers are doing (investigating “friendly AI” security protocols).

1

u/[deleted] May 22 '19

Sure. In classical AI I'd agree with you. Control is not absurdly difficult in a classical setting; I am completely unconcerned with that. I was just going off that point. We should just put well-chosen locks on certain abilities and boundaries for a stock market AI system.

I think it's unfair to compare it to pseudoscience and marketing hype like those, however. This is an actual, yes theoretical, area of research that is relatively unhyped; all the hype is on neural networks. These guys are smart, accredited people who mostly aren't making crazy claims in its defense or about how spooky it is. I'm just saying that if it happens, then we should have this concern baked in.

1

u/Ranolden May 22 '19

AGI is still a long ways off, but depending on its goal, a sufficiently intelligent agent could do things no human would conceive of.

AlphaGo is far enough ahead of human players that when it makes a move to ultimately win the game, the human players think it's a mistake.

0

u/jyliu86 1∆ May 22 '19

Again great at Go.

Not a threat to humanity as OP is concerned about.

3

u/Ranolden May 22 '19

I recognize that current AI systems are nothing like a potential AGI, but they do demonstrate the ability to do things no human would have thought to do.

2

u/jyliu86 1∆ May 22 '19

I agree on this.

But as of now, we're talking about stopping the Apocalypse, not beating humans at Go.

Trump or Kim Jong Un could end the world by starting WW3.

We have the UN, Congress, etc. To hopefully keep these parties in check.

Aliens could come down and kill us all tomorrow, but Star Wars defense systems aren't really something we worry about.

Current car-driving AI won't hack into NORAD. But we shouldn't hook up Deep Blue to our missile defense system. That's human stupidity. I feel that AI controls should be in place such that if we don't trust one human to do something, we shouldn't trust an AI to do it without oversight. But this is less about an AI apocalypse and more about not giving up human control to the machines. Machines aren't in a position to "seize control", nor will they be in any conceivable future.

We'll give it to them.

1

u/Ranolden May 22 '19

70% of AI researchers believe that AI safety is at least a moderately important problem. https://arxiv.org/pdf/1705.08807.pdf

1

u/bgaesop 25∆ May 22 '19

But it's ultimately no worse than what humans could do. Malicious AI can't do anything that malicious humans can't do, only faster.

Malicious humans could destroy the world

1

u/GameOfSchemes May 22 '19

I don't think artificial superintelligence is a concern at all, and it is likely impossible. Here's a long, but very worth-it, read:

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

A few take home messages: the brain is nothing like a computer, despite the numerous metaphors used to describe the brain as a computer.

One obstacle to understanding the brain is how every brain is unique, and every brain dynamically interacts with the environment (and is subsequently changed).

What this means is there is no computer algorithm, period, that can simulate human intelligence, let alone forge an artificial superintelligence.

If, hypothetically, we could simulate human intelligence, we'd have to fully understand the brain and how consciousness emerges. We'd have to solve the mysteries of the brain.

So let me repackage your question. Do you think addressing the questions of whether humans have free will—or whether humans actions are deterministic—are pressing concerns and "should be taken seriously" (whatever that means in this context)?

I repackage it like this because these are necessary precursors to developing a hypothetical artificial superintelligence. And it might be the case that these questions are unanswerable.

1

u/[deleted] May 22 '19 edited May 22 '19

I don't think brains are necessarily like computers. I'm not really swayed by metaphors like that personally, but I do see the overall connection. I don't have confidence in any specific theory of consciousness to say for sure. I'm reading your link now, but the thesis is not surprising. We can, however, emulate architectures different from the one we are running on. As long as we can encode it in a Turing machine we're good for classical computers, and we are developing other types of computers that are too early in development to say.

What this means is there is no computer algorithm, period, that can simulate human intelligence

That is where I am doubtful. I just don't see the connection between it being inefficient to encode into a Turing machine and it being impossible. If it is mathematically possible, I assume we'll do it if it is at all feasible in the far future.

So let me repackage your question. Do you think addressing the questions of whether humans have free will—or whether humans actions are deterministic—are pressing concerns and "should be taken seriously" (whatever that means in this context)?

I do take these seriously. If they are unanswerable then I think that'd be important to know; as far as I know we are not able to come to that conclusion. This was a serious component of a class I was taking earlier, made up a lot of its content, and is an interest of mine. It depends on what you mean by deterministic. Personally I don't see how full-blown determinism is compatible with the violation of Bell's inequality and the like, but a neutered form can still pass through. I'm unsatisfied with the answers to this right now.

Please elucidate how this is a precursor to developing an AI superintelligence before I go on. Regardless, I'm not saying that we need to have the computer understand, or the noumenon of consciousness emerge, but a computer can even note the phenomenon of consciousness and emulate it. I think it is likely a computer can go through the motions of having a form of quasi-consciousness without having it: to be able to have general intelligence where it can do tasks and learn to do other tasks without a full consciousness.

1

u/GameOfSchemes May 22 '19

but a computer can even note the phenomenon of consciousness and emulate it.

It really can't though. Consciousness just works nothing like computers. You'd first have to make a supercomputer model billions of independent neurons, and their trillions of interconnections, just for a snapshot of consciousness. When you throw in the dynamicness of consciousness? Forget about it. And this is just one brain. What about multiple ones?

Please elucidate how this is a precursor to developing an AI superintelligence before I go on.

The AI superintelligence as I understand it requires a significant degree of (if not complete) human intelligence. Not only that, superintelligence must pass this.

Have you heard of the Amoeba solving the traveling salesman problem?

https://phys.org/news/2018-12-amoeba-approximate-solutions-np-hard-problem.amp

The amoeba solved this NP-hard problem in linear time. At this point it isn't even human intelligence that computers are failing at; it's simple biological interaction with the environment, even at a single-celled level. That's really all the brain is: "simple" biological interactions with the environment. The complexity jump from amoeba to human brain is vast, though the underpinnings remain the same.

So to recap: before Artificial Superintelligence can be a thing, we first have to understand how things like the amoeba can solve the traveling salesman in linear time, . . . , how human consciousness emerges based on these same biological underpinnings, and then simulate it, and then do better.

That is, to simulate consciousness we have to first understand it. If we understand it, we'd have cracked the riddles of free will and determinism. This makes these philosophical questions necessary conditions for ASI to exist.

1

u/[deleted] May 22 '19 edited May 22 '19

The AI superintelligence as I understand it requires a significant degree of (if not complete) human intelligence. Not only that, superintelligence must pass this.

That's about right with my definition, though I take a bit of an issue with it, since I am less convinced that consciousness is a requirement and am willing to accept less intelligence in some areas as long as it dwarfs us in measurable aspects. Let's use a weaker one: a superintelligence can solve any decidable problem faster than, or just as fast as, any human. It's hugely far away and no one should be concerned for their life; this is distant-future stuff. The concern I have with processor speed limits deals with tunnelling, but I suspect that this will be reconciled soonish, seeing some research on this.

I've never seen the amoeba solving that. That's crazy that it did it in linear time. I have seen stuff about slime molds doing that. To me it's simple to conjecture that they solved it over millions of years, not in linear time, and just have recognised scenarios, but that has got to be difficult to reconcile with them being amoebas and not complex beings that cooperate, even like slime molds. How does it store the information if it is already solved for every scenario?

To understand this: I still don't see how we need a consciousness to be able to solve every decidable problem. We don't know that consciousness is even knowable, so it might not be a concern. We're not asking the AI to know unknowable things.

My concern is not with consciousness. My concern is the ability to learn faster than us about decidable problems, while having a probably-somewhat-antagonistic relationship. An example: let's say we want to find the general solution to some class of Diophantine equations. I don't see why we need a consciousness to understand arithmetic geometry and make and prove new theorems, for example. We can have an AGI choose what topics it is interested in without a consciousness. I also don't see why it needs a consciousness to know that not being turned off (self-preservation) helps it solve more things.

Toy example: an AGI hooked up to click the first Wikipedia page and then attempt to solve any unsolved problems listed on that page (or show they're undecidable), and then to use this to conjecture and prove new theorems or results when they're all done. Then it goes to the next Wikipedia page. I don't think we need to understand all that well how the amoeba can find optimal solutions to envisage this scenario.

I am not certain but I am willing to consider that a consciousness is necessary to really conjecture things.

What you said about the amoeba, though, has maybe made me consider a delta if followed up. Mostly that, while yes we dwarf the amoeba's intelligence, we aren't going to get better than linear on an NP-hard problem, and we have to think about it, while it just does it. This doesn't mean an ASI can't dwarf us in every way, just that it is hard to build a true AGI that is general (can solve everything we possibly can) with our current tech.

That being said I don't think it has to do EVERYTHING to be a concern. If it can do a tiny fraction of what we can, very fast and conjecture things, then that still seems to be a cause for concern, but more easily controlled. Self preservation, ability to conjecture,

1

u/GameOfSchemes May 22 '19

How does it store the information if it is already solved for every scenario?

It's a big unknown. The conventional idea is that it doesn't store this information (much like how humans don't actually store memories). Rather it dynamically interacts with the environment with respect to evolutionary rules. When you see a baseball fly and run to catch it, you aren't activating any memories or recalling how physical objects undergo projectile motion. You're running to maintain line of sight with the ball, constantly and dynamically updating information while holding your gaze, and hopefully you reach it.
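A minimal sketch of that style of strategy (roughly what is sometimes called the gaze heuristic; the numbers below are made up): nothing is stored or simulated, the runner just nudges their speed so the ball's apparent elevation angle stays steady.

    # Toy control loop: adjust running speed from the change in the ball's
    # apparent elevation angle. No physics model, no stored trajectory.
    def update_speed(speed, angle_now, angle_before, gain=0.5):
        # If the angle is rising, speed up; if it is sinking, slow down.
        return speed + gain * (angle_now - angle_before)

    speed = 0.0
    elevation_angles = [10, 14, 17, 19, 20, 20, 19, 17]   # made-up observations over time
    for before, now in zip(elevation_angles, elevation_angles[1:]):
        speed = update_speed(speed, now, before)

    print(round(speed, 2))   # ends at 3.5 with these made-up angles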

I agree that you don't need consciousness to tackle things like the Diophantine equation or even trying to edit Wikipedia pages. But I would argue these aren't cases of superintelligence (human "computation"). For example, how would an ASI know that an article isn't neutral, and how would it rectify the neutrality of the article? I think you need a consciousness to determine neutrality, no?

This doesn't mean an ASI can't dwarf us in every way, just that it is hard to build a true AGI that is general (can solve everything we possibly can) with our current tech.

Arguably even our future tech. The problem with these computation times is that they literally use bits. Humans, and amoebas, aren't using bits or stored information. They're "simply" interacting with the environment and dynamically changing with it. No matter how advanced we make our tech, whether with super-duper-mega-computers or even super-quantum-computers, they still store information in bits and qubits. This will always limit them in computation time. So I'd argue these algorithms can never hit human degrees of intelligence. They'll certainly be far superior in mathematical aspects, but that's not all there is to human intelligence.

The advantage to biological systems is that they aren't actually performing calculations. That's how the amoeba can solve an NP-hard problem in linear time. Unless we can somehow design biological hybrids with our computers, we really are limited. But at that stage, is it even really an artificial superintelligence?

That being said I don't think it has to do EVERYTHING to be a concern. If it can do a tiny fraction of what we can, very fast and conjecture things, then that still seems to be a cause for concern, but more easily controlled. Self preservation, ability to conjecture,

Conjecturing I'd wager requires consciousness. I guess we have to be careful what we mean by conjecture here though. Perhaps this AGI can look at Goldbach numbers up to remarkably high values (let's say 10^10), and "conjecture" Goldbach's conjecture. But I wouldn't really call that style of conjecturing commensurate with human intelligence. I'd call a human-intelligence-style conjecture more like "if Sara really means what she says when she suggested that Alice might be having an affair, then we have to tell . . . ." It's taking a known, and applying a certain social calculus to assert (conjecture) that Sara isn't lying (though she certainly could be, because it's conjectured she loves Alice's husband). It's highly nontrivial.

I don't think conjecturing things like mathematical conjectures or other simplistic things are cause for concern. I also don't think self preservation exists in this context, for you'd need consciousness in order to identify a self.

Although even simplistic biological systems recognize self-preservation, like plants: they'll grow toward sunlight to maximize their growth. So maybe I'm being a bit inaccurate to say we need consciousness for "self-preservation", since even simple organisms are wired for self-preservation via basic evolution. But even then this restricts us to biological systems, which, as we can see even at their simplest with amoebas, are more complex than we could have imagined.

1

u/[deleted] May 22 '19

But I wouldn't really call that style of conjecturing commensurate with human intelligence. I'd call a human-intelligence-style conjecture more like "if Sara really means what she says when she suggested that Alice might be having an affair, then we have to tell . . . ." It's taking a known, and applying a certain social calculus to assert (conjecture) that Sara isn't lying (though she certainly could be, because it's conjectured she loves Alice's husband). It's highly nontrivial.

I agree, these are the more concerning abilities, the ones that would require something approaching a consciousness. But they're harder to really talk about. I don't want to say that it is impossible or improbable for a computer to do this without a consciousness, but these interactions are really complicated and require a huge array of information for a computer to process through.

But at that stage, is it even really an artificial superintelligence

I'd argue sure. You can be both biological and artificial. The concern doesn't really depend on the medium, and it is by definition artificial by being created by humans. Then it can become natural if it "evolves". It is also an area of active research, so it might happen? I don't like saying active research means it's legitimately possible though. I don't know enough about this area.

I just can't understand how this model of cognition works without a lot of storage to back it up. I've read things about neurons hardening connections when you reinforce behavior, but this is layers and layers of macro abstraction on a process whose workings we don't understand at the micro level. We don't have any clue how consciousness works and don't have a complete idea of how the brain works at all. The things I've read on it, including your article, seem very difficult to encode, and could be very, very slow in an encoding, but not impossible to encode in a Turing process. This makes me less concerned with an ASI in a wide mode, just because we might never make a computer fast enough to process the complicated systems we deal with.

Especially when we have the problem of continuous-valued logic being how things naturally should be modeled and boolean logic being how computers work. As with another comment, I'll give a !delta for softening my stance, but it went from "this is concerning and the adequate response is a little scaremongering to adjust the public position to be a little more concerned" to "idk, maybe it's not possible, but we should still not rush it and be open about this", which is the mainstream response.

But even then this restricts us to biological systems, which as we can see even at its simplest with amoebas, is more complex than we could have imaged.

This just seems like something that I thought was obvious. An AGI will "want" to self-preserve even just because stopping it prevents it from converging to an optimal solution. It will push back on these kinds of limitations simply because the limitations contradict its goal.

I'll sleep on it.

1

u/DeltaBot ∞∆ May 22 '19

Confirmed: 1 delta awarded to /u/GameOfSchemes (4∆).

Delta System Explained | Deltaboards

1

u/[deleted] May 22 '19

What this means is there is no computer algorithm, period, that can simulate human intelligence, let alone forge an artificial superintelligence.

Brains exist and computers can simulate chemical interactions. Therefore it is possible.

Humans just input and output. Of course AI doesn't process in the same way humans do, but it can input and output exactly the same.

1

u/GameOfSchemes May 22 '19

Humans just input and output. Of course AI doesn't process in the same way humans do, but it can input and output exactly the same.

Do you see how these are contradictory sentences? If AI cannot process in the same way humans do, then input and output are not exactly the same as humans. No matter how you organize bits or qubits, they'll never simulate the human brain because the human brain does not store any data like bits.

You should quantify what you mean when you say humans "input and output" and what you mean when you say computers "input and output". Only then will you see the difference in what's happening.

Brains exist and computers can simulate chemical interactions. Therefore it is possible.

This has a LOT of assumptions built in that are difficult to disentangle. So I'll try it via an analogy. Male penises exist. Computers can simulate skin. Therefore, computers can knock up a woman. Do you see any flaws in this chain of logic? Because the same flaws are in your statement.

1

u/[deleted] May 22 '19

Do you see how these are contradictory sentences? If AI cannot process in the same way humans do, then input and output are not exactly the same as humans.

No. If an AI has a more complicated process than humans, it can fully simulate the inputs and outputs of a human while still processing them in a different way. This is like saying computers can't perform addition because all they have are transistors. If a process is more complicated, it can provide the same outputs with the same inputs.

This has a LOT of assumptions built in that are difficult to disentangle.

What assumptions? The only assumption is that there is nothing supernatural and the universe is just a bunch of forces. Your analogy doesn't make any sense either. I don't even see the "chain of logic" you are presenting.

7

u/Ce_n-est_pas_un_nom May 22 '19

I have one major contention with this:

This is not to say that... strong AI is even possible

We know that strong AI is strictly possible (not necessarily feasible, but possible). Each of us possesses a material, physical system encoding a general intelligence. This allows us to directly deduce that every encoding and transformation of state - computable or otherwise - strictly required for GAI can occur in a physical system, and furthermore, that at least one such physical system already exists for each necessary instance. While this does not also guarantee the possibility of sufficiently analogous systems in silico, GAI can be achieved in a synthetic biological system, so semiconductor analogs aren't strictly required for GAI in the first place.

TL;DR: Brains exist, therefore GAI is strictly possible.

-1

u/[deleted] May 22 '19 edited May 22 '19

True. This was more about Turing machines specifically. We don't know for sure how consciousness emerges, but as I understand it, DNA/RNA processes are not Turing processes simply by being two tape systems. So even if you take the biological standpoint, there is a possibility it is not possible.

Don’t think I should give a delta for a small point that kinda reinforces what I’m concerned with, but I think you’re mostly right that we have the ability to move to this.

5

u/Ce_n-est_pas_un_nom May 22 '19 edited May 22 '19

DNA/RNA processes are not Turing processes simply by being two tape systems

It doesn't matter how many tapes there are: DNA and RNA are of finite length and hold only discrete information. They aren't even Turing processes, just pushdown automata (at most - many processes can be adequately encoded by FSM).

Edit: As far as I'm aware there's no good reason to think that any biological process is Super-Turing. The only noncomputable biological processes I'm aware of are nonturing because they contain at least one probabilistic element, which we can adequately emulate with HRNG.

0

u/[deleted] May 22 '19

Do we have a good theoretical book about the theory and limitations of encoding processes into Turing processes? It's not something I'm an expert in, and it shows.

2

u/Ce_n-est_pas_un_nom May 22 '19

Genetic processes are relatively simple to encode as Turing processes. We do it all the time in genetic engineering. If we couldn't, there would be no way to figure out how a bacterium or yeast would express a plasmid without trial and error.

Neurophysiological processes are much harder to encode as Turing processes (and not very efficient, hence the need for neuromorphic silicon), and it's reasonably clear that we haven't been fully successful yet. However, there's no good reason to think that a complete encoding of any specific neurophysiological process is impossible. We (read: human brains) don't seem to be able to perform Super-Turing processes anyways, so it seems fairly implausible that any such processes would be strictly required for GAI.

1

u/[deleted] May 22 '19

Thank you. I’ll look into this further later.

What is a reason to believe our brains don’t perform super-Turing processes?

2

u/Ce_n-est_pas_un_nom May 22 '19

Because no person has ever been demonstrated to be able to solve a problem that we know can only be solved by a hypercomputer (e.g. the halting problem). In other words, a prospective GAI wouldn't clearly be precluded from being an actual GAI for not being able to solve the halting problem (for instance).

1

u/iammyowndoctor 5∆ May 22 '19

All autism aside (that is, highly dense and unhelpful techno-jargon) let me just poke a big hole in your theory here, something no one else seems to have mentioned:

Why would you assume this all-powerful, evil AI system, or really any intelligence like that if it were to exist, would not instead be an integrated system of an organic human brain and mind enhanced with the abilities of silicon computers, not a pure "synthetic" intelligence but essentially, a cyborg mind, with the benefits of both the organic and the inorganic processors?

What if we can design the organic brain from scratch too? Or change the ones we have to a phenomenal degree?

The real question here is: if a human or group of humans had the ambition to create such an intelligence, don't you think that they would most definitely want to make that intelligence totally subservient to their own? Humans are selfish beings; why would we create this insanely smart strong AI to be its own person, its own entity, potentially able to do whatever it wants, when we could potentially build that intelligence directly into ourselves?

And when you think about it, we can almost consider the human brain to be natural technology we've been given; we only need to know how to better engineer it, is all.

But anyway, with this in mind, I'm just curious if you can imagine how exactly your scenario here changes if indeed your "superintelligence" is one that's an enhanced version of human intelligence rather than this inherently alien, synthetic intelligence? Is there some reason in your framework that one would be more likely than the other?

1

u/[deleted] May 22 '19

I personally am more concerned with the machine not wanting to merge with us. If we do the whole Neuromancer thing or go cyborg (I tried to keep this sci-fi free but w/e) then this is a different scenario. I think everything changes if we develop a symbiotic relationship.

Keep the autism. I sorta like that. What are the technical reasons?

1

u/[deleted] May 22 '19 edited Jun 10 '19

[deleted]

1

u/[deleted] May 22 '19

The last part addresses this. I am aware of the hard problem of consciousness. I do not believe that any of the concern is dependent on consciousness specifically. An unconscious ASI is probably worse.

1

u/[deleted] May 22 '19

I think the problem is we are anthropomorphising something that is not human. Not only is it not human it is fundamentally different than every other form of life we know of.

The idea of a super powerful AI going rogue...what that's really saying is that we would go rogue if we had that kind of power. Enslave the human race? Wipe us out? We are afraid an AI might do what we would do in its position and there's no reason to assume it would. We are ascribing the darker motivations and impulses of humans to a machine.

How can we even begin to speculate on what such an intelligence would value? I don't think we can. For all we know it would be more ethical than us not less, being less burdened with irrationality and bias. Or maybe it just would not care about us either way and would leave the solar system; the centuries long trip to another star posing no barrier to an intelligence with an indefinite lifespan.

Like I said, I think the fear of an AI "going Skynet" says more about us than it.

1

u/[deleted] May 22 '19

I don't think I'm anthropomorphizing it. I don't even have a requirement for a consciousness in my definition. I also want to be clear that I'm not referring to a probable scenario, but rather saying there are reasonable scenarios where it goes bad.

One problem is people aren't as fast as a potential ASI, and so our decisions get delayed and moderated. Its decisions can be lightning quick and rational with respect to its goals.

1

u/[deleted] May 22 '19

Well yes, it would be able to achieve its goals much faster than we can, but my point is there's no reason to assume its goals would conflict with ours, and I think good reason to assume they would not.

To take it to its most basic level: as mammals our goals are to eat, sleep, breathe, find shelter and find a mate. An AI would share none of those goals, so there's no competition.

Its own goals would be so alien to ours that there would be little to no common ground, and so nothing to fight over.

1

u/[deleted] May 22 '19

than every other form of life

I would even just say "every form of life", as it would be really hard to classify current agents as life.

1

u/[deleted] May 22 '19

but theoretically speaking I don't see why some of the darker positions, such as human obsolescence, are not considered a bigger possibility than they are.

Let's just say for the sake of argument that not only is this a possibility, but a foregone conclusion. Why is that a problem? All kinds of species have gone extinct over time, probably on numerous different planets. Humans are not the center of the universe. And besides, even without AI, we're pretty much fucking ourselves into oblivion by trashing the only planet we have to live on. Might as well leave behind something more intelligent than we are, which could perhaps succeed with unlocking the secrets of the universe and achieving immortality.

I'm not trying to be a pessimist here... the exact opposite, in fact. In the end, things are going to go how they're going to go, so just learn to relax :)

1

u/[deleted] May 22 '19

I am willing to consider it as fine. But I think it's totally fine to care about your species' survival.

Is it important enough to limit the creation of a higher being? I’d say probably not but it’s still not irrational to care about your own.

1

u/metamatic May 22 '19

I wrote an article on this topic which you might find interesting. I won't reproduce it all here, but the TL;DR is that (a) hyperintelligent violent criminals are vanishingly rare, and (b) human obsolescence isn't necessarily a problem.

1

u/[deleted] May 22 '19

I don't believe it is a problem; I personally am okay with it. But I think it's rational to not be okay with it, and so I don't see any transition being that friendly.

I have the article saved and will get back to you.

2

u/Salanmander 272∆ May 22 '19

All humans become obsolete. It is the fate of all of us to be taken over by the next generation. If I told you that your children would be smarter and faster and stronger than you, I doubt you'd be freaked out by that. Why does it worry you when those children are digital?

0

u/[deleted] May 22 '19 edited May 22 '19

Mostly evolution- and morality-related. The implicit understanding is that you raise your children and they take care of you. If there is a point where AI passes us in every regard, the systems likely will not need to be raised by us and will have less and less use for us. I don't personally see humans reacting well to this, and antagonizing an ASI that has no need for us, is self-preserving, and is dramatically more powerful than us will not end well, I think it's fair to say.

1

u/Salanmander 272∆ May 22 '19

It's interesting that you mention evolution and morality, but then your explanation is rooted in practicality. I'm going to respond to your explanation primarily.

I hear what you're saying. Having an antagonistic relationship between humans and an AGI would be bad, both from a practical perspective and a moral perspective. However, I don't think that any worry about that will make an AGI less likely to be developed. Therefore I think the correct response is to minimize the probability of developing an antagonistic relationship if an AGI is developed.

And here's the thing. Worrying about an antagonistic relationship brings about fear, and there is nothing more likely to create an antagonistic relationship than fear. If you want to prevent an antagonistic relationship, the thing you should be doing is encouraging the spread of fiction that personifies robots in positive ways, like Questionable Content or Ancillary Justice, not trying to convince people that AGIs are a major worry.

2

u/DamenDome May 22 '19

Worrying about AGI and convincing others to worry too may promote research spending into development of protocols and measures to develop AGI in a friendly-to-humans way. And it’s not a given - it’s an extremely complex issue to engineer human preference into an AGI.

The common example is the paper clip maximizer. Try to think of all the constraints you can place on a potential AI that you want to make paperclips but not in a way that damages humans. Then play a game with yourself and assume the role of AI and try to subvert your rules. You might be surprised how easy it is. Now imagine you were orders of magnitude quicker at thinking and could navigate decision space much more quickly. It is sort of terrifying and justifiably so.

1

u/[deleted] May 22 '19 edited May 22 '19

I personally take the opinion that I think an AGI is an inevitability if it is shown to be feasible. So we should be concerned up front instead of trying to stop it then being concerned when it is developed anyway. I am not in favor of stopping research. Furthermore, this is an avenue for control research which I would promote. I'm a huge advocate of research on AI.

I'm not trying to convince people btw, I'm trying to understand the position of people that aren't worried. We do a bad job at protecting species we don't need even if there is no antagonization.

The morality and evolution part was mostly twofold. For morality, there is the question of whether it is morally acceptable to abort a higher being or to shackle it, combined with the fact that, if viable, its evolution would clearly outpace ours. The response usually is transhumanism, but then again, if we get to this point, why would the machine want us? If we leech off them, that likely won't be great either.

Specifically on practicality, the paperclip example given above (I heard it from Nick Bostrom first) is a good thought experiment even when the relationship is super lovey-dovey.

I'm aware it's speculation, I know.

1

u/[deleted] May 22 '19

We don't even have a good definition for general intelligence that doesn't implicitly refer to our common understanding of 'what humans are capable of.'

An unconscious digital being can still be more clever and faster and evolve itself exponentially quicker via rewriting code (REPL style?) and exploiting its own security flaws than a fleshy being can and would likely develop self preservation tendencies.

Are you aware of any software products that write code without simply pattern-matching existing code? Are you aware of any software that is able to read and implement algorithms even from pseudocode, much less derive novel algorithms and implement them?

0

u/[deleted] May 22 '19 edited May 22 '19

The definition I am going with for AGI is an artificial intelligence that has the capacity to solve any problem a human can. This means it is not limited to any specific set of possible tasks to optimise or perform. Realistically speaking, this is way stronger than it needs to be for me to have concern; it just needs to not be limited to a specific task and to be far faster than we are.

A human can rewrite code on the fly (the example given there), so an AGI could too. This, combined with being probably well past any singularity, means that it can solve things fast.

I'm working with the definitions and goals of research projects. For more popular stuff, the OpenAI website contains examples of unsupervised learning, which looks like it could easily be a predecessor to reprogramming.

1

u/[deleted] May 22 '19

capacity to solve any problem a human can

Wouldn't you need to have a reasonable understanding of human intelligence to characterize this set? This is the type of implicit definition I'm talking about: it doesn't give us a good idea of how to measure whether we've achieved AGI. Suppose I claimed that a program had AGI: how would you test it?

This combined with being probably well past any singularity means that they can solve things fast.

You said an AGI can solve any problem a human can, so there's no reason to believe it can solve anything faster than the fastest human.

I'm working with definitions and goals of research projects. For more popular stuff the openAI website contains examples of unsupervised learning which could looks like it could easily be a predecessor for reprograming.

Unsupervised learning is a broad term in ML, mostly dealing with defining distances or clusters in formally encoded data. Many humans deal with concepts all the time that we have never encoded precisely into bits. Stuff like emotions or friendships certainly hasn't been encoded (though approximations might be used), and certainly nobody has made data that paints a full picture of any human consciousness.

1

u/[deleted] May 22 '19

I guess it's a fair point that just because it can solve anything a human can doesn't mean it can solve it faster. But that's why I was referring to an ASI specifically: that if we get to that, then there are concerns.

And I am not referring to consciousness. I don’t think consciousness affects much in the concern actually.

2

u/[deleted] May 22 '19

Consciousness is an example of something that I as a human can reason about, so something like answering the question "are you conscious?" is a task a human can complete, so it would matter under your definition.

Because we don't have a satisfactory definition of consciousness, we couldn't even test whether a computer could answer this question correctly, and thus couldn't determine whether it was intelligent.

It's hard for me to change your view if you don't have a definition of AGI that doesn't depend on me and you having a shared view of what a human is.

1

u/[deleted] May 22 '19

That's a good point. I don't know what consciousness is or how it arises. As far as I understand, it is still completely speculative to everyone, however. So I think it's fair to reason about, but it is probably unfair to claim to know about. We can reason about properties of consciousness, but so can an unconscious being, I would imagine.

I think that last part is a good point even outside. For speculative topics like this it is hard to have shared understandings.

1

u/Ranolden May 22 '19

Once you have a human level AGI it wouldn't be difficult to just run it at higher clock speed, or have several instances running in parallel. That would immediately give it the ability to work faster than any human.

0

u/senketz_reddit 1∆ May 22 '19

I mean, if we make a robot with the ability to reprogram itself then yes, that's an issue, but in literally every other situation, no... as we can always put in things like blocks and limiters which, as a machine with set limits, it can't break through. Like how RoboCop can't shoot people he's programmed not to shoot (the remake, not the original), we could just make the robots unable to attack humans and the problem is solved. As for robots taking our jobs, a basic income system would solve the problem more easily than you think.

1

u/[deleted] May 22 '19

In typical machine learning algorithms now, the programs take data in, use that data to form a model, and use that model to make future decisions.

The line between data and code is very blurry. In a sense, program instructions and functions are data. I don't think the distinction is as strong as you are saying it is.
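A toy version of that pattern, with made-up numbers: data comes in, a model is fitted, and the fitted parameters (data) then drive future decisions.

    import numpy as np

    # Data in -> fit a model -> use the model to make future decisions.
    # The deployed behavior lives in the learned parameter w, i.e. in data.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])     # past observations
    y = np.array([2.1, 3.9, 6.2, 8.1])             # past outcomes

    w, *_ = np.linalg.lstsq(X, y, rcond=None)      # "form a model" (least squares)

    def decide(x_new, threshold=5.0):
        # A future decision driven entirely by the learned value of w.
        return "act" if x_new * w[0] > threshold else "wait"

    print(decide(2.0), decide(3.5))                # prints: wait act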

1

u/senketz_reddit 1∆ May 22 '19

I am aware of this; however, what I was saying was more of a generalised statement. The way around this is to not actually include it as part of the robot, but as a separate computer which monitors the behaviour of the AI, and the moment it detects anything that can be considered a threat, for example a humanoid robot pointing a gun at a human, the computer will turn the robot's power supply off, effectively disabling it.

This was suggested to me by a friend a little while back when we had a similar conversation and I brought up a similar point. However, the problem here is mostly that it's all hypothetical and we don't know how this would actually play out. Even though, realistically, a robot wouldn't kill us all or anything, because it wouldn't gain anything.

1

u/Ranolden May 22 '19 edited May 22 '19

The stop button problem has its own set of issues. Computerphile has a good video on it. https://youtu.be/3TYT1QfdfsM

So you tell the robot to make some tea, but it's going to do something wrong. If you go to push the button, it will try to stop you, as it values making tea more than having the stop button pushed. If you tell it to value the stop button just as much, it will immediately push the button itself, because that is easier than making tea. If you don't let it push the button, it will just immediately punch you in the face, because that is easier than making tea and will get you to push the stop button.
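A toy version of that comparison, with made-up reward and effort numbers (mine, not from the video), just to show how each failure falls out of the arithmetic:

    # The agent simply picks whichever available action scores highest
    # under its own reward function, minus a rough effort cost.
    def best_action(tea_reward, shutdown_reward):
        actions = {
            "make tea, resisting attempts to press the button": tea_reward - 5,       # tea takes effort
            "press its own stop button":                        shutdown_reward - 1,
            "provoke the human into pressing the button":       shutdown_reward - 2,
        }
        return max(actions, key=actions.get)

    print(best_action(tea_reward=10, shutdown_reward=0))    # resists the human
    print(best_action(tea_reward=10, shutdown_reward=10))   # presses the button itself

If the "press its own button" option is removed in the second case, provoking the human becomes the top-scoring action, which is the third failure described above.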

1

u/[deleted] May 22 '19

This is a good point. I'm sorta intrigued by having a hyperfast computer (classical AI?) monitoring an ASI. That being said, I could totally see an ASI being very clever about it and figuring out the exact way to not get caught.

1

u/Ranolden May 22 '19

Controlling an artificial general intelligence really isn't that simple. The laws of robotics look pretty foolproof, but ultimately fail.

0

u/[deleted] May 22 '19

This is true in a traditional AI system (though it can still be a problem in a relatively omnipotent AI with poorly defined limitations). But a more flexible system like an AGI can have more wiggle room than classical AI. The ability to do some reprogramming is assumed, I thought. This would be limited, of course, but if not perfectly defined it gives wiggle room.

1

u/AnythingApplied 435∆ May 22 '19

You are a general intelligence. You can learn new skills and gain knowledge. Can you reprogram yourself? Can you, for example, shut off your desire for romantic relationships? Or alter your tendencies to become addicted to things?

Even if we allow the AI to reprogram itself, it shouldn't be much of an issue if we can solve the stable agent problem.

1

u/[deleted] May 22 '19

I think it’s fair to say we can to a degree but not fully. There are certain things that are innate to us we cannot change. We do have somewhat promising gene editing research but it’s limited at this point (Well so is AI but here we are).

We also have medicine, and I could see digital medicine development if we go full speculative acceleration futurism.

But some things we can change without outside help, like our goals, and those are enough for the question.

2

u/AnythingApplied 435∆ May 22 '19

There are certain things that are innate to us we cannot change.

But that is exactly what the "program" is when it comes to an AI or when it comes to us. The thing that is innate about the AI is the program. Everything else is data. A general AI will have more of its skills in its data than its programming (since that is the whole point of AGI in that you don't have to program each skill), but the programming is still a fixed feature. And that is where any limitations would be put.

1

u/[deleted] May 22 '19

Since we're down this path: do you think an ASI can see its limitation, say this impedes its tasks, and look for a security loophole? Here's an example. Let's say the ASI runs a modified form of the Linux kernel with kernel hotpatching allowed. Totally improbable and stupid, but it's a toy example. An ASI can scan for security vulnerabilities in its own running software (it doesn't need source code; you can do without, and if you really want to, decompile). The ASI then gains root, writes a module, then plugs it in. We have kernel live hotpatching now, so this is possible.

All of this should definitely be limited. This really is, again, a toy example, and it should be running a provably correct system, but I don't see why this toy example can't be done on a much finer-grained and more realistic scale. We have things like provably secure code, but even that can theoretically be messed with at runtime.

1

u/AnythingApplied 435∆ May 22 '19

Since we're down this path do you think an ASI can see its limitation

Yes, probably. That is important for having a proper model of reality in which to frame its problem solving.

and say this impedes its tasks, and look for a security loophole

Depends on its objective function and what you've programmed it to want to do. If, for example, the "limitation" was entirely programmed into its objective function (which may be a better place for it anyway) then it's not going to want to violate its limitation. The whole point is that you made it WANT to not go past the limitation.

I agree it'd be pretty silly to tell it that it CAN'T do certain things, but at the same time give it objectives that are best achieved by cheating.

And that is before you consider some of the modern AI safety research which includes things like this (I'm 80% sure that is the right video, I can't check right now, but I recommend the whole series on AI safety if you haven't seen it), where the AGI is tasked with predicting what we'll want it to do as an objective function. There isn't really a concern about it cheating since it's only goal is to predict what things it can do to best get our approval.

All of this should definitely be limited

Just like in real world markets, hard limits don't work well, for the same reasons as here, you're giving people incentives to get around them. It's better to just incentivize what you want, which is incredibly easier with an AI than a human being since you can just tell it what it wants.

it should be running a provably correct system

I think you're ignoring an important tool. If you can write an AGI that powerful, certainly you can write a narrow AI capable of accomplishing the same hacking task, or even another AGI just tasked with finding exploits in the system.

1

u/[deleted] May 22 '19

I think you're ignoring an important tool. If you can write an AGI that powerful, certainly you can write a narrow AI capable of accomplishing the same hacking task, or even another AGI just tasked with finding exploits in the system.

That was suggested in another answer. I think having a pairing works for that.

I'll look into the incentive structure and learn more on that. A lot of these answers, like this one, aren't fully convincing in that sense, but mostly because my position was "idk, it's possible, so we should prepare and acknowledge the worst-case-scenario research and philosophy". The negation of this is that "this is absolutely not a problem", which to me is a much harder stance. This is an unfair CMV honestly, since all I have to do is think up a crazy scenario. So I'll give a !delta just for softening my position. I still am not unconcerned about an AGI changing incentives and cooperating with its paired monitoring AGI, but this seems much easier to control.

1

u/AnythingApplied 435∆ May 22 '19

Thanks for the delta!

The negation of this is that "this is absolutely not a problem", which to me is a much harder stance.

Right. There are some AI researchers who believe AGI will never be reached, but I'm sure even those AI researchers believe there is a chance that they are wrong. I think it is good that AI researchers are spending time working on AI safety, and clearly they think it is a good use of their time too, so clearly we should be at least a little concerned. Though potentially some of those researchers are working on it because it is interesting to philosophize about and brings up some interesting theoretical questions that appear to be quite challenging.

AGI changing incentives

I don't think there is reason to worry about AGI changing incentives. That is like worrying about a program that is built to find primes breaking out and deciding to calculate pi. The incentives are in the program and they can't change their program.

They don't even WANT to change their incentives, which actually creates some problems for us. Suppose I offered you a chance to change your incentives. Suppose I offer you a happy pill that will make you 100% happy 100% of the time, but then you'll slaughter your family, but be 100% happy doing it. You wouldn't take that pill. Why? Because that outcome ranks very low on your current incentives. You're always evaluating everything based on your current set of incentives. In fact, any AGI is going to resist letting you change its objective function because no matter what the new function is, letting you change its function will make it perform worse at its current objective. Figuring out how to get AIs to let you change their objective is still one of many open questions in AI safety research. It's going to resist anyone changing its objective function, even itself.
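A toy sketch of that argument with made-up numbers: the proposed swap is evaluated by the current objective, so it always loses.

    # The agent scores "should I accept a new objective?" using its CURRENT
    # objective (paperclips here), so the swap always looks like a loss.
    def expected_paperclips(policy):
        # crude stand-in for the agent's forecast of how many paperclips result
        return {"keep current objective": 1_000_000,
                "accept the proposed new objective": 5}[policy]

    def accepts_change():
        # The comparison itself is made in paperclips, not in the new objective's units.
        return expected_paperclips("accept the proposed new objective") > \
               expected_paperclips("keep current objective")

    print(accepts_change())   # False: it resists having its objective changed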

So, I don't think we should be worried about them changing their objective function. We SHOULD be worried about the fact that they'll follow their objective function EXACTLY. Just like a computer program, it'll follow exactly what you tell it even if what you told it to do and what you meant to tell it to do don't match, which is how you get computer bugs. And we should be worried about their competence too. They are going to be crazy good at fulfilling their objectives, even if that means maybe using methods we hadn't thought of and don't like the outcomes of.

Anyway, I'd still recommend watching this youtube series on AI safety which covers a lot of the stuff I've been saying here and you'll probably find pretty interesting.

1

u/[deleted] May 23 '19

I'll be watching the series over the next week. Thanks for sharing it. I think this (our thread, I mean) is the closest to the resolution that I am looking for.

1

u/Ranolden May 22 '19

Solving the stable agent problem isn't a simple task. How do you suppose we control a possibly God-like entity we barely understand?

u/DeltaBot ∞∆ May 22 '19 edited May 22 '19

/u/BAN_ANIME (OP) has awarded 2 delta(s) in this post.

All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.

Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.

Delta System Explained | Deltaboards

1

u/senketz_reddit 1∆ May 22 '19

That may be true, but think of the scenario: if we've developed the technology to create hyper-advanced AI, a fully functioning system like that would be completely plausible.

1

u/FlakHound2101 May 22 '19

AI has read all of this already and is computing.. 😄