Archive for the ‘ANN’ Category

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn’t give power to certain conservatives who favor the criminalization of marijuana – and I think you all know I don’t smoke it, but I’m an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – people would be able to have the same experience they’re having now, or wildly different experiences. A lot of experiences currently denied to all but a few would become open to everyone, such as being a rock star (simulated crowd unless you get *really* good at it and real people want to come see you – but I’d be into playing for a simulated crowd, I’m not picky).

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You’d have an interface for blocking people or locating new people you’d like to be in your life, for defining what you’d like your homes to look like and switching between them, and for adding possessions – look at the video game The Sims and you get a good idea of a lot of the interface you’d need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you’re good at it. Otherwise, it’s simSubjects for you. But I’d probably include code to allow you to forget that fact if you wanted to, so you could *think* you were ruling the free world. I’d also include ‘conditional virginity’ (note that a lot of these are NOT my ideas, but the ideas of someone I talk to – $person’s future self, so to speak), so you could temporarily forget an experience you’d had and have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty viruses loaded. (Well, we kind of have that situation now, don’t we? ;-)) However, the advantages are pretty staggering. Among other things, a separate, much smaller collection of neural code running under the hypervisor could do whatever body-care things needed to happen – farming, feeding, etc. In the meantime, you could eat a ten-course meal if you wanted to and never gain a pound.

In addition, you could choose to learn things either ‘the hard way’, for the joy of the journey, or ‘matrix-style’. Many times I think you’d want to learn them the hard way when they relate to creating art, because that is the only way the result would be “yours” and not just the group’s skill at playing the guitar or whatever. And for some things, like athletic skills, the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and getting it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Western Science

Thursday, January 5th, 2017

One of the problems I keep thinking about is that western science has one major flaw.

It doesn’t know what it’s measuring *with*. Until you know the answer to that question, you don’t know what you’re measuring. We don’t yet understand what we are – at least, if the hard problem of consciousness has been solved, no one has told me the good news. I’ve heard a lot of theories, but I haven’t heard one solid enough to call plausible yet.

In other words, dear scientists, please bump the priority on neuroscience and both ANN and NNN research. Dear warmongers, please stop wasting money blowing shit up until we can solve this more important problem. Kthx, Sheer.

Fun discussion about ANNs on facebook

Saturday, December 10th, 2016

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate 15 years out on Darpa Synapse and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying whether a machine is more intelligent than humans. Such a system is impossible to test, and coupled with the ability to lie, it’s a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure during entrainment. This enables us to do things like giving our intelligence a powerful desire for, say, human artwork. This might not be the most moral thing ever, but it is something we could do. This gives us something to trade with it – offering us the possibility of befriending it.
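(Aside: a minimal sketch of what “defining success and failure in entrainment” cashes out to in current practice – success is whatever the loss function rewards, so choosing the loss is choosing what the network is drawn toward. The toy data and the “likes_artwork” signal below are invented for illustration; this isn’t a real training setup.)

```python
# Toy illustration (mine, not from the thread): an ANN's "success" is whatever
# the loss function rewards, so picking the loss picks what it pursues.
# The likes_artwork labels are a made-up stand-in for any goal we might choose.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                            # toy input features
likes_artwork = (X @ [1.0, -2.0, 0.5, 0.0] > 0) * 1.0    # hypothetical target signal

W = rng.normal(scale=0.1, size=4)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ W)))                       # network's prediction
    W -= 0.5 * (X.T @ (p - likes_artwork)) / len(X)      # descend the loss we defined

print("agreement with the goal we chose:", np.mean((p > 0.5) == (likes_artwork > 0.5)))
```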

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than human – well, I’m of the opinion that if you have something with more neurons than human, and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré Conjecture, and that was simply another human. It took five years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once it’s been come up with, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a pandora’s box. But my experience with humans suggests we *like* opening pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when you don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNNs naturally evolve toward whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning how best to utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity that placed another species above its own interests would be maladapted.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before, unless you want to count things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concerns are more that this would turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is when a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m more thinking with a larger neural surface area, it might be able to see patterns in world economies and governments, suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presume human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN and you control its attractors, you control what it’s trying to achieve. You define success – when neurons fire together and wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NNN know when it’s succeeding. Over time it gets more complex.
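(Aside: “fire together, wire together” is the Hebbian learning rule. Here’s a toy sketch of it, purely to make the entrainment idea concrete – it is not a model of how infants actually learn faces.)

```python
# A toy Hebbian update: neurons that are active together strengthen their link.
# Purely illustrative; sizes and rates are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n, eta = 8, 0.01
W = np.zeros((n, n))

for _ in range(1000):
    x = (rng.random(n) < 0.3) * 1.0     # which neurons fired this step
    W += eta * np.outer(x, x)           # co-active pairs "wire together"
    W *= 0.999                          # mild decay keeps weights bounded
np.fill_diagonal(W, 0.0)                # ignore self-connections

print(np.round(W, 2))                   # strongest entries = most often co-active pairs
```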

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child turning into an adult starts controlling their own definitions of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)
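(Aside: a rough sketch of what “wiring the attractors into one of the outputs” could mean – one output of the network scales its own learning signal, so the system partly decides how hard to chase the goal. Everything here is a made-up illustration under that assumption, not a proposal for how you’d actually build it.)

```python
# Sketch: a "gate" output scales the network's own learning signal.
# Shapes, names, and the fixed gate weights are all invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
goal = (X[:, 0] > 0) * 1.0                  # the externally supplied goal

W_task = rng.normal(scale=0.1, size=4)      # weights for the task output
W_gate = rng.normal(scale=0.1, size=4)      # weights for the self-set "attractor" output

for _ in range(300):
    task = 1 / (1 + np.exp(-(X @ W_task)))
    gate = float(np.mean(1 / (1 + np.exp(-(X @ W_gate)))))   # "how much do I care?"
    W_task -= gate * 0.5 * (X.T @ (task - goal)) / len(X)    # learning scaled by its own gate
    # in a fuller version W_gate would be learned too; here it stays fixed

print("gate value:", round(gate, 3), " task accuracy:", np.mean((task > 0.5) == (goal > 0.5)))
```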

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just as a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have many voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. They might not be beyond their ‘childhood’ when we control their attractors. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy.. what Heinlein suggested in Friday.. or it may lead to it wanting to get along with us

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that in fact is Synapse’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along

Are larger neural networks stable?

Tuesday, February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore’s law holds – one interesting question is whether a neural network larger than ours would be stable.
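The arithmetic behind that 15-year figure is simple enough to sketch – though the starting scale below is my own assumption, not a measured number:

```python
# Back-of-envelope check on "about 15 years, if Moore's law holds".
# The starting point is an assumption for illustration, not a real benchmark.
neurons_now = 1e9          # assumed scale of today's largest simulated networks
human_neurons = 8.6e10     # commonly cited estimate for the human brain
doubling_years = 2.0       # classic Moore's-law doubling period

neurons_later = neurons_now * 2 ** (15 / doubling_years)
print(f"in 15 years: {neurons_later:.2e} neurons "
      f"(~{neurons_later / human_neurons:.1f}x a human brain)")
```

With those assumptions you get roughly 1.8×10^11 simulated neurons – a couple of times the human count – which is the sense in which “if Moore’s law holds” is doing all the work in that sentence.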

This is a subject that, if Google is to be believed, is of much scholarly interest. I’m still not at a place where I can evaluate the validity of the discussions – I’m still working my way through a full understanding of neural coding – but I think it’s an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans all have roughly the same number of neurons, so to get more of one aspect of performance, you’re going to have to lose some other.

When we start building minds bigger than ours, the question that occurs is: will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you push into intelligence, the more the system tends to oscillate or otherwise show bad signs of feedback coupling?
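One crude way to poke at the oscillation question today, at least in toy form: in a random recurrent network, whether activity dies out or saturates tracks the overall gain of the weight matrix, at any size. The sketch below only illustrates the question – the parameters are arbitrary and it says nothing about real brains:

```python
# Toy stability probe: scale a random recurrent weight matrix so its spectral
# radius is roughly "gain", run it, and see whether activity dies or saturates.
import numpy as np

rng = np.random.default_rng(3)

def final_activity(n, gain, steps=200):
    W = rng.normal(scale=gain / np.sqrt(n), size=(n, n))   # spectral radius ~ gain
    x = rng.normal(size=n)
    for _ in range(steps):
        x = np.tanh(W @ x)
    return np.abs(x).mean()       # ~0: activity died out; near 1: saturated/chaotic

for gain in (0.9, 1.5, 3.0):
    print(f"gain {gain}: mean |activity| after 200 steps = {final_activity(500, gain):.3f}")
```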

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we’re going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it so that it will want what we offer in order to have that need met, what moral position does that leave us in?