Archive for February, 2017

“Us And Them” and neural networks

Sunday, February 12th, 2017

More of my hand-wavy guesswork about the structure of the human mind follows.

So, one of the interesting questions that comes up when thinking about NNNs is the question of ‘us’ and ‘them’. It’s a pretty standard part of human thinking to see yourself as a member of a group (the ‘us’) and to see people who are not members of that group as ‘the enemy’, or at least as less desirable in some way. I don’t think this type of thinking is all that helpful a lot of the time, but it’s interesting to consider what it says about the underlying network.

Earlier, I hypothesized that while we as individuals have the ability to determine whether information is coming from inside or outside of us (or whether we think it is – in fact we’re probably not in a great position to know for sure), very few neural subnets can tell the source of information – and in fact many subnets may not be able to tell a data access from a command from a teaching / learning moment. Extending that idea a little bit, it may be very difficult to abstract over any external data unless a local copy of it exists.
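
As a toy illustration of what I mean – this is a deliberate oversimplification of my own, not a claim about real neuroanatomy – imagine a subnet that only ever sees a vector of activations. Where that vector came from simply isn’t part of what arrives:

from dataclasses import dataclass
from typing import List

@dataclass
class ToySubnet:
    """A made-up 'subnet' that only ever sees raw activations."""
    weights: List[float]

    def receive(self, activations: List[float]) -> float:
        # Same weighted response no matter where the activations came
        # from; there is no 'source' field in the signal.
        return sum(w * a for w, a in zip(self.weights, activations))

subnet = ToySubnet(weights=[0.5, -0.2, 0.8])
sensory_input  = [1.0, 0.0, 0.3]   # arrived from "outside"
internal_query = [1.0, 0.0, 0.3]   # arrived from another subnet
# Identical vectors, identical responses: the subnet can't tell a data
# access from a command from a teaching moment.
print(subnet.receive(sensory_input) == subnet.receive(internal_query))  # True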

It’s very likely that any attribute we can recognize in the “them” also exists within us, since if it didn’t we wouldn’t have a frame of reference to think about it at all. This doesn’t mean we’re all mass murderers, but it does mean that we all have a collection of symbols surrounding the idea of mass murder. Generally, I imagine, that symbol is wired up in such a way as to inhibit such behavior in most of us. (After all, neurons most definitely have inhibitory inputs as well as excitatory inputs.)
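
Just to make the excite/inhibit point concrete, here’s a toy neuron model – my own oversimplification with made-up numbers, nothing like a real neuron – where inhibitory inputs subtract from the drive and can keep the thing from firing at all:

def toy_neuron(excitatory, inhibitory, threshold=1.0):
    """Fire (return 1) if net drive exceeds the threshold, else stay quiet (0)."""
    drive = sum(excitatory) - sum(inhibitory)
    return 1 if drive > threshold else 0

# The symbol can be present and even somewhat active...
symbol_activity = [0.6, 0.7]
# ...but with strong enough inhibitory wiring, the behavior never expresses.
inhibition = [0.9, 0.8]
print(toy_neuron(symbol_activity, inhibition))  # 0: inhibited
print(toy_neuron(symbol_activity, []))          # 1: uninhibited, it fires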

Now, it’s important to realize that a lot of these symbols are necessarily fairly large. You don’t fit an idea like mass murder inside a single neuron, or even a hundred, and you also need some fairly large neural bridges to allow reaching between symbols that are physically somewhat far apart, because the overall system is so large that there are physical limits on what can be wired directly to what.

So, one of the questions – especially insofar as we’ve been discussing neural games of Go – is how much of the ‘them’ is an interior part of us that is attempting to be an acting part at any given time. We, the controlling personality, are obviously going to resist acting on the urges and impetus of the parts of us that we would consider part of the ‘them’, but those parts are still very much active and engaged neural subnets which are participating in the overall big picture of making us who we are. If you removed them entirely, you would likely not get a stable or usable system. This would seem to play nicely into the philosophy of Yin and Yang.

DID and neural networks

Wednesday, February 1st, 2017

So, the popular consensus is that DID is a mental illness caused by extreme trauma, in which a personality fragments into segments.

I assume it is news to no one that while I do not consider $future_person[0] an alter, I do believe that I have DID, although normally my alters stay very far backgrounded. I do, however, think that they all contribute to the overall system – that is to say, I think that, for example, when I’m jamming with the band and making up lyrics on the fly, and my conscious experience is only slightly engaged in creating the lyrics (a phrase or fragment or concept), some wordsmith part of my mind is creating bits that rhyme and turning this into full-blown lyrics. For an example of this, check out this audio clip from band practice with Bruce, Art, and me – this was not a prewritten song, it was improv – clip

I think it is possible to have something that is close kin to DID and have it be a more productive order than the average configuration, rather than a disorder. The reason is that it enables the operator of the mind using this configuration to more effectively utilize the entire neural network.

Consider that normally your conscious experience is only engaging with a few dozen threads at once – that’s all you can have ‘foregrounded’, or actively a part of your world. Now, obviously there are neural structures that do things like run a scheduler for events at preset times, but if you have alters, you can also pass off foreground tasks that you don’t need to be actively engaged with to other bits of yourself – it’s kind of like the advantage of having multiple cores in a CPU. I don’t know if alters have a conscious experience, or just a head node and task list, or what – it would be fascinating to be able to look at the structure of my mind in enough detail to find out – but certainly they can be engaging neurons and neural subnets that would otherwise be completely idle.
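
The CPU analogy, sketched in code – and it is only an analogy, with names and timings I made up for illustration: the ‘conscious’ main thread juggles a handful of foreground tasks while backgrounded workers chew on the rest and hand results forward when they’re ready.

from concurrent.futures import ThreadPoolExecutor
import time

def background_wordsmith(fragment):
    # Stand-in for a backgrounded part turning a fragment into a full lyric.
    time.sleep(0.1)  # pretend this takes real work
    return f"{fragment} ...worked up into a rhyming line"

with ThreadPoolExecutor(max_workers=4) as backgrounded_parts:
    # Hand the fragment off; the foreground thread stays free for the jam.
    pending = backgrounded_parts.submit(background_wordsmith, "a phrase or concept")

    # Foreground: the handful of threads I'm consciously engaged with.
    for beat in range(3):
        print(f"foreground: playing beat {beat}")

    # The backgrounded part reports back when it's ready.
    print("background:", pending.result())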

Now, of course, I have no memory of what it might be like to *not* be this way. So it’s possible that I’m wrong and that I would simply be able to handle more threads if I wasn’t broken. I do seek certain types of reintegration, although with a fair amount of fear and trepidation because I’m hesitant to fuck too much with a running system.