Fun discussion about ANNs on Facebook

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate DARPA SyNAPSE 15 years out and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying that a machine is more intelligent than humans. Such a system is impossible to test, and coupled with its ability to lie, that makes it a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure in entrainment. This enables us to give our intelligence a powerful desire for, say, human artwork. This might not be the most moral thing ever, but it is something we could do. This gives us something to trade with it – offering us the possibility of befriending it.
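To make that concrete, here’s a toy sketch (plain numpy, with made-up numbers) of the point about definitions of success: whatever we put in the training objective is, in effect, the value system we hand the network. “Human artwork” here is just a stand-in matrix of feature vectors, not a real dataset.

```python
import numpy as np

# Toy sketch only: the "definition of success" is the objective we train
# against. "Artwork" here is a made-up stand-in for real feature vectors.
rng = np.random.default_rng(0)
artwork = rng.normal(size=(100, 16))        # hypothetical artwork features
target = artwork.mean(axis=0)               # what "success" looks like

W = rng.normal(scale=0.1, size=(16, 16))    # a single-layer toy network
lr = 0.01
for _ in range(500):
    x = rng.normal(size=16)                 # some stimulus
    y = np.tanh(W @ x)                      # network's response
    # Loss = distance from "artwork"; its gradient is the learning signal.
    W -= lr * np.outer((y - target) * (1 - y**2), x)
```

The particular rule doesn’t matter – the point is that “what it wants” is whatever the training signal rewards.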

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than human – well, I’m of the opinion that if you have something with more neurons than a human, and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré Conjecture, and that was just another human. It took 5 years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once someone has, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a Pandora’s box. But my experience with humans suggests we *like* opening Pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when you don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNs naturally evolve towards whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning how best to utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity would be maladapted if it placed another species above its own interests.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before, unless you want to think about things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs.
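A toy sketch of that point, using made-up “face” / “not face” feature vectors in place of real images, and scikit-learn purely as an illustrative tool – the labels are the “definition of success” here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for face / non-face image encodings.
rng = np.random.default_rng(1)
faces = rng.normal(loc=1.0, size=(200, 8))
others = rng.normal(loc=-1.0, size=(200, 8))
X = np.vstack([faces, others])
y = np.array([1] * 200 + [0] * 200)        # the "definition of success"

clf = LogisticRegression().fit(X, y)       # learn to satisfy that definition
print(f"training accuracy: {clf.score(X, y):.2%}")
```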

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concerns are more that this would turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is what you get after a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m thinking more that, with a larger neural surface area, it might be able to see patterns in world economies and governments, and suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presume human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN, and you control its attractors, you control what it’s trying to achieve. You define success – when neurons fire together and wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NN know when it’s succeeding. Over time it gets more complex.
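A minimal Hopfield-style sketch of “fire together, wire together” (plain numpy, made-up pattern): repeatedly reinforcing one pattern of co-activity turns it into an attractor the network settles back into, even from a noisy cue.

```python
import numpy as np

# Hebbian toy: weights grow between co-active units, so the pattern we
# keep presenting becomes the state the network falls back into.
n = 8
W = np.zeros((n, n))
pattern = np.array([1, 1, 1, 1, -1, -1, -1, -1])   # the state we reinforce

for _ in range(50):
    W += 0.1 * np.outer(pattern, pattern)           # Hebbian update
np.fill_diagonal(W, 0)

# Recall: start from a corrupted cue and let the network settle.
state = pattern.copy()
state[:3] *= -1                                      # flip part of the cue
for _ in range(5):
    state = np.sign(W @ state)
print("recovered the trained pattern:", np.array_equal(state, pattern))
```

Pick a different pattern to reinforce and you get a different attractor out of exactly the same wiring rule – which is the sense in which choosing the attractors is choosing what the network settles toward.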

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child, growing into an adult, starts controlling their own definition of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)
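Purely speculative, but that parenthetical might look something like this: the training target starts out external (“parental”) and is gradually blended with a target the network itself produces on a second output head. Every name and number below is an illustrative assumption, not an established design.

```python
import numpy as np

# Speculative sketch: row 0 of W drives behaviour, row 1 proposes a goal.
# The external goal is faded out as an "autonomy" schedule ramps up.
rng = np.random.default_rng(3)
W = rng.normal(scale=0.1, size=(2, 4))
lr, external_goal = 0.05, 1.0

for step in range(1000):
    x = rng.normal(size=4)
    behaviour, self_goal = np.tanh(W @ x)
    autonomy = min(step / 1000, 1.0)                  # "childhood" fades out
    goal = (1 - autonomy) * external_goal + autonomy * self_goal
    err = behaviour - goal
    W[0] -= lr * err * (1 - behaviour**2) * x          # only the behaviour head learns here
```

As the autonomy schedule ramps up, the external goal matters less and less – which is exactly the hand-off (and the risk) being argued about here.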

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just like a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have many voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. They might not be, beyond their ‘childhood’ when we control their attractors. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy.. what Heinlein suggested in Friday.. or it may lead to it wanting to get along with us.

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that in fact is SyNAPSE’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along
