Archive for the ‘ANN’ Category


Monday, January 7th, 2019

Only one musical post in all of 2018. Going to have to do better in 2019. I tracked ten different songs that I didn’t think were good enough to release in 2018, and I’ve tracked three so far in 2019. I’m not sure if I need to turn down the lint level, or if I’m just working towards another plateau. On the other paw, it’s not like I get emails clamoring for more of my music or anything 😉

One thing I’ve really been feeling is the sense of missing people. I miss Phoebe, I miss $PERSON – I don’t really ever seem to get over the people I’ve lost. I miss my uncle Joe. I’ve even reached the point of missing my dad, who is still in my life. (I have set up a camping trip with him – I’m not so stupid as to not fix the ones that can be fixed.)

One of the things with Phoebe is remembering and regretting all the stupid things I said, especially during our break-up. I know that I participated in breaking that friendship too badly to be repaired and I wish that I had a time machine so I could do things somewhat differently.

Ah well, we go on. What other choice do we have?

I think part of what bothers me about missing $_PERSON at this point is that it’s been so long since I had any kind of contact that I have *no* idea who she is. At some point your copies of copies of memories have no real reliability to them at all, and generation loss has pretty much etched that one away to where it’s nothing but a guess. That’s combined with the sense that the things that pushed her away were not really me – I mean, they certainly weren’t who I would choose to be, and they all occurred in extreme mental states.

Recently I spent some time talking to a Facebook friend who seemed to have been experiencing an extreme mental state of her own. A number of my friends criticized me for this, or at least expressed doubt that this was a wise use of my time, but I am fairly sure that what I was doing fit nicely inside my philosophy of ‘be excellent to each other’, and that if more people behaved the way I do, the world would be a better place.

And I have to admit, as I research neural networks, my half-memories – often scarred, and interrupted by blackouts – of the periods where I wasn’t myself are telling. I’m fairly certain what I was experiencing was islanding – very large collections of subnets, large enough to be able to respond to stimuli but not large enough to sustain consciousness. This brings up the interesting question of whether, in DID, the alters are conscious. I’ve always assumed that they are, but the kitteny neocortex research I’ve been doing is making me question that assumption.

One of the things I’ve realized is that there’s no way we currently know of to tell whether a neural network is having a conscious experience or not. An NN will learn, and respond to stimuli based on what it’s learned, whether the ‘magic’ of consciousness is there or not. At this point I tend to agree with the person who theorized that consciousness is what information feels like when it’s been processed, but I think that’s only true in a very specific context, which likely has to do with the way temporal memory works. However, in building my unsupervised learning system for the kittens, I found myself implementing something very similar to short-term memory, because in order to do unsupervised learning in the model I’m currently using, you have to let LTP create the bindings first, *then* learn the lesson. You also have to keep track of previous lessons so you can unlearn them if they turn out to be wrong. (At least, you do to solve the particular problem I’m working on at the moment.)
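A minimal sketch of that two-phase idea – bind first via a short-term trace, commit second, and keep an undo trail so lessons can be unlearned. The class and method names here are my own illustration, not code from the actual kitten system:

```python
# Hypothetical sketch: short-term "LTP" bindings are collected first,
# then committed as a lesson, and lessons are remembered so a wrong
# one can later be rolled back.

class ShortTermLearner:
    def __init__(self):
        self.weights = {}    # long-term weights, keyed by (pre, post) pair
        self.bindings = []   # short-term trace of recently co-active pairs
        self.history = []    # past lessons, kept so they can be unlearned

    def observe(self, pre, post):
        """Phase 1: LTP creates a short-term binding; no weight change yet."""
        self.bindings.append((pre, post))

    def learn(self, reward):
        """Phase 2: commit the bound pairs, remembering the lesson."""
        lesson = [(pair, reward) for pair in self.bindings]
        for pair, r in lesson:
            self.weights[pair] = self.weights.get(pair, 0.0) + r
        self.history.append(lesson)
        self.bindings = []

    def unlearn_last(self):
        """Roll back the most recent lesson if it turned out to be wrong."""
        for pair, r in self.history.pop():
            self.weights[pair] -= r
```

This is deliberately toy-scale – the point is only the ordering: bind, then learn, then (maybe) unlearn.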

I haven’t really come up with any new years resolutions – I have a vague sense that I’d like to exercise more, vape less, eat less, write more music, and generally try not to break anything critical about my life.

From a Facebook discussion: free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. “I” as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego – that this is an illusion. B: I am not sure, if I do exist, that I’m not deterministic. Experimenting with artificial neural networks, I note that they tend strongly towards the deterministic unless measures are taken to keep them from being deterministic. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet, the idea that I am a person wandering around taking actions of my own free will is very compelling – especially when I start discussing the matter, which seems very meta.
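Point B is easy to see in practice: with no injected randomness, a network’s input-to-output mapping is a fixed function, and adding a noise source is exactly the kind of “measure” I mean. A toy illustration (one made-up neuron, not any particular framework):

```python
import random

# A toy "network": a single fixed-weight ReLU neuron. Without a noise
# source, the same input always yields the same output; with one, it varies.

def neuron(x, w=0.5, noise=None):
    rng = noise or (lambda: 0.0)
    return max(0.0, w * x + rng())  # ReLU with optional additive noise

deterministic = [neuron(2.0) for _ in range(3)]  # identical every call
noisy = [neuron(2.0, noise=lambda: random.gauss(0, 0.1)) for _ in range(3)]
```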


Tuesday, November 14th, 2017

So, one of the things I’ve been learning about is ANNs. I’ve tried playing with several different frameworks and several different topologies, and one of the ones I’ve been playing with is Darknet.

I’ve been trying to train a Darknet RNN on a corpus generated from all the text in my blog. So far the results have been less than stellar – I think I need a bigger neural network than I’ve been using, and I think in order to do that I need a bigger GPU because I’m running out of patience. I was astonished to discover >1 teraflop GPUs are now in my price range, so I’ve ordered one.
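As a much smaller illustration of the learn-a-text-model-from-a-corpus idea – this is just a character-bigram sampler I’m sketching, not Darknet’s RNN, which learns far longer-range structure than this:

```python
import random
from collections import defaultdict

# Minimal character-bigram model: record which character follows which,
# then sample a chain of those transitions to generate new text.

def train(corpus):
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model, seed, length=40, rng=random.Random(0)):
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

model = train("the cat sat on the mat")
sample = generate(model, "t")
```

Even this produces vaguely corpus-flavored gibberish; the hope with a big enough RNN is that the gibberish starts sounding like me.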

I’m hoping soon to have simSheer available as a php endpoint that people can play with. All of this is building up to using Darknet for some other purposes, such as image recognition.

It’s interesting to think that even if simSheer manages to sound like me, it will be doing so with no sense of aboutness at all – well, I *think* it will be doing so with no sense of aboutness. It has no senses, and no other data to tie my writings in with, so I don’t think that any of the neurons in it can possibly be tagged with any real world meaning. Or can they? This is probably a subject that some famous philosopher has held forth on and I should probably go try and find their works and read them, but in the meantime it’s certainly fun to think about.

I really wonder to what extent the aboutness problem (borrowed from Stephenson’s Anathem) applies to NNNs – natural neural networks. Would the cluster I have for the concept of love even remotely resemble the clusters other people have? What would the differences say about me and them?

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn’t give power to certain conservatives who are in favor of criminalization of marijuana – and I think you all know I don’t smoke it but I’m an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head, and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – they would be able to have the same experience they’re having now, or wildly different experiences – a lot of experiences denied to all but a few would become open to everyone, such as the experience of being a rock star (simulated crowd unless you get *really* good at it and real people want to come see you, but I’d be into playing a simulated crowd, I’m not picky..)

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You’d have an interface for blocking people or locating new people you’d like to be in your life, for defining what you’d like your homes to look like and switching between them, and for adding possessions – look at the video game The Sims, and you get a good idea of a lot of the interface you’d need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you’re good at it. Otherwise, it’s simSubjects for you. But I’d probably include code to allow you to forget that fact if you wanted to, so you could *think* you were ruling the free world. I’d also include ‘conditional virginity’ (note that a lot of these are NOT my ideas, but the ideas of someone I talk to – $person’s future self, so to speak) so you could temporarily forget an experience you had, and have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty virii loaded. (Well, we kind of have that situation now, don’t we ;-)). However, the advantages are pretty staggering. Among other things, a separate much smaller collection of neural code running under the hypervisor could do whatever body-care things needed to happen including farming, feeding, etc. In the meantime, you could eat a ten course meal if you wanted to and never gain a pound.

In addition, you could either choose to learn things ‘the hard way’, for the joy of the journey, or ‘matrix-style’. Many times I think you’d want to learn them the hard way when they were related to creating art, because that is the only way the skill would be “yours” and not just the group skill in playing the guitar or whatever. And for some things, like learning athletic skills, the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and get it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Western Science

Thursday, January 5th, 2017

One of the problems I keep thinking about is that western science has one major flaw.

They don’t know what they’re measuring *with*. Until you know the answer to that question, you don’t know what you’re measuring. We don’t yet understand what we are – at least, if the hard problem of consciousness has been solved, no one has told me the good news. I’ve heard a lot of theories, but I haven’t heard one I’d call solid enough to call plausible yet.

In other words, dear scientists, please bump the priority on neuroscience and both ANN and NNN research. Dear warmongers, please stop wasting money blowing shit up until we can solve this more important problem. Kthx, Sheer.

Fun discussion about ANNs on Facebook

Saturday, December 10th, 2016

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate 15 years out on Darpa Synapse and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying that a machine might be more intelligent than humans. Such a system is impossible to test and coupled with the ability to lie, it’s a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure in entrainment. This enables us to do things like giving our intelligence a powerful desire for, for example, human artwork. This might not be the most moral thing ever, but it is something we could do. This gives us something to trade with it – offering us the possibility of befriending it.

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than human – well, I’m of the opinion that if you have something with more neurons than human, and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré Conjecture, and that was simply another human. It took 5 years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once it’s been come up with, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a pandora’s box. But my experience with humans suggests we *like* opening pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when we don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNNs naturally evolve towards whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning about how to best utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity would be maladapted if it placed another species above its own interests.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before, unless you want to think about things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concerns are more that this would turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is when a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m more thinking with a larger neural surface area, it might be able to see patterns in world economies and governments, suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presumes human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN, and you control its attractors, you control what it’s trying to achieve. You define success – when neurons fire together and wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NNN know when it’s succeeding. Over time it gets more complex.
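(An aside on the ‘fire together, wire together’ line I keep leaning on in this exchange: it’s just the Hebbian learning rule. A toy version, with a made-up learning rate and activity values:)

```python
# Hebbian update: when pre- and post-synaptic units are active at the
# same time, the weight between them grows. Purely illustrative numbers.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen w in proportion to coincident activity."""
    return w + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = hebbian_update(w, pre, post)
# Only the two coincident firings moved the weight: w == 0.2
```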

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child turns into an adult and starts controlling their own definition of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just like a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have many voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. They might not be beyond their ‘childhood’ when we control their attractors. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy.. what Heinlein suggested in Friday.. or it may lead to it wanting to get along with us

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that in fact is Synapse’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along

Are larger neural networks stable?

Tuesday, February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore’s law holds – one interesting question is whether a larger neural network than us would be stable.
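The “about 15 years” figure is just Moore’s-law arithmetic – assuming a doubling every 18 months and that buildable neuron count scales with transistor count, both big assumptions:

```python
# Back-of-the-envelope version of the "15 years" claim. Assumes one
# doubling every 18 months (a common reading of Moore's law) and that
# neuron count scales with transistor count - both big assumptions.

HUMAN_NEURONS = 8.6e10      # roughly 10^11, the figure used in this post
doublings = 15 / 1.5        # 10 doublings in 15 years
growth = 2 ** doublings     # ~1024x capacity

# A system about 1/1000th of human scale today would pass human scale
# at the end of the window.
start_needed = HUMAN_NEURONS / growth
```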

This is a subject that, if Google is to be believed, is of much scholarly interest. I’m still not at a place where I can evaluate the validity of the discussions – I’m still working my way through a full understanding of neural coding – but I think it’s an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans in general all have roughly the same number of neurons, so in order to get more of one aspect of performance, you’re going to have to lose some other aspect.

When we start building minds bigger than ours, the question that occurs is: will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you delve into intelligence, the more the system tends to oscillate or otherwise show bad signs of feedback coupling?

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we’re going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it so that it will want in order to have that need met, what moral position does that leave us in?