Archive for the ‘NNN’ Category

Wow..

Monday, January 7th, 2019

Only one musical post in all of 2018. Going to have to do better in 2019. I tracked ten different songs that I didn't think were good enough to release in 2018, and I've tracked three so far in 2019. I'm not sure if I need to turn down the lint level, or if I'm just working towards another plateau. On the other paw, it's not like I get emails clamoring for more of my music or anything 😉

One thing I've really been feeling is the sense of missing people. I miss Phoebe, I miss $PERSON, I don't really ever seem to get over the people I've lost. I miss my Uncle Joe.. I've even reached the point of missing my dad, who is still in my life. (I have set up a camping trip with him – I'm not so stupid as to not fix the ones that can be fixed.)

One of the things with Phoebe is remembering and regretting all the stupid things I said, especially during our break-up. I know that I participated in breaking that friendship too badly to be repaired and I wish that I had a time machine so I could do things somewhat differently.

Ah well, we go on. What other choice do we have?

I think part of what bothers me about missing $_PERSON at this point is that it's been so long since I had any kind of contact that I have *no* idea who she is. At some point your copies of copies of memories have no real reliability to them at all, and generation loss has pretty much etched that one away to where it's nothing but a guess. Add to that the sense that the things that pushed her away were not really me – I mean, they certainly weren't who I would choose to be, and they all occurred in extreme mental states.

Recently I spent some time talking to a facebook friend who seemed to have been experiencing an extreme mental state of her own. A number of my friends criticized me for this, or at least expressed doubt that this was a wise use of my time, but I am fairly sure that what I was doing fit nicely inside my philosophy of 'be excellent to each other', and that if more people behaved the way I do, the world would be a better place.

And I have to admit, as I research neural networks, my half-memories – often scarred, and riddled with blackouts – of the periods where I wasn't myself are telling. I'm fairly certain what I was experiencing was islanding – very large collections of subnets, large enough to be able to respond to stimuli but not large enough to sustain consciousness. This brings up the interesting question of, in DID, are the alters conscious? I've always assumed that they are, but then I've been doing kitteny neocortex research that is making me question that assumption.

One of the things I've realized is that there's no way we currently know of to tell whether a neural network is having a conscious experience or not. An NN will learn, and respond to stimuli based on what it's learned, whether or not the 'magic' of consciousness is there. At this point I tend to agree with the person who theorized that consciousness is what information feels like when it's been processed, but I think that's only true in a very specific context, which likely has to do with the way temporal memory works. However, in building my unsupervised learning system for the kittens, I found myself implementing something very similar to short term memory, because in order to do unsupervised learning in the model I'm currently using, you have to let LTP create the bindings first, *then* learn the lesson. You also have to keep track of previous lessons so you can unlearn them if they turned out to be wrong. (At least, you do for the particular problem I'm working on at the moment.)
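
To make that bookkeeping concrete in code rather than wetware: below is a minimal, purely illustrative toy (not the actual kitten model; every name in it is made up for the example). It lets a binding form, remembers each recent update, and can roll the recent ones back if the lesson turns out to be wrong.

```python
import numpy as np

class ReversibleHebbLearner:
    """Toy Hebbian-style learner that keeps a short-term trace of recent
    weight updates ("lessons") so they can be unlearned later.
    Illustrative only -- not the model described in the post."""

    def __init__(self, n_inputs, n_outputs, lr=0.01, trace_len=10):
        self.w = np.zeros((n_outputs, n_inputs))
        self.lr = lr
        self.trace = []            # recent weight deltas, oldest first
        self.trace_len = trace_len

    def bind(self, pre, post):
        """LTP-style binding: strengthen co-active pre/post pairs,
        and remember the update so it can be reverted."""
        delta = self.lr * np.outer(post, pre)
        self.w += delta
        self.trace.append(delta)
        if len(self.trace) > self.trace_len:
            self.trace.pop(0)      # old lessons become permanent
        return delta

    def unlearn_last(self, n=1):
        """Roll back the last n lessons if they turned out to be wrong."""
        for _ in range(min(n, len(self.trace))):
            self.w -= self.trace.pop()

    def respond(self, pre):
        return self.w @ pre


# usage: let a binding form first, then take it back if the lesson was wrong
learner = ReversibleHebbLearner(n_inputs=4, n_outputs=2)
x = np.array([1.0, 0.0, 1.0, 0.0])
y = np.array([1.0, 0.0])
learner.bind(x, y)
learner.unlearn_last()
```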

I haven't really come up with any New Year's resolutions – I have a vague sense that I'd like to exercise more, vape less, eat less, write more music, and generally try not to break anything critical about my life.

Learning to damp out panic attacks

Sunday, September 23rd, 2018

So, recently I’ve been thinking about a skill that I acquired some time ago, and I think I can explain how to do it if anyone else would like to learn.

Note that to *really* do this requires some hardware you'll need to pick up somewhere – namely, a pulse meter and an EEG.

Training level 1: Learning to lower your pulse.

You'll need to get a pulse meter, and stare at it and try to lower the number on it. Like any biofeedback training, this takes time, and you'll be most successful at learning to do it if you start practicing *first*, when you're *not* experiencing a panic attack. As with all biofeedback training, your mind is going to figure out how to achieve your goal mostly without you – knowing your goal is to lower the number on the meter, it will try various things until it figures it out. Just keep trying, and you'll find your way.
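
(Purely illustrative aside: if your particular pulse meter happens to expose its readings to a computer, the 'number to stare at' can be as simple as the loop sketched below. `read_heart_rate()` is a hypothetical stand-in for whatever your device actually provides; you'd have to wire it up to your own hardware.)

```python
import time

def read_heart_rate():
    """Hypothetical stand-in for your pulse meter's API.
    Replace with whatever your device actually exposes."""
    raise NotImplementedError("wire this up to your own hardware")

def biofeedback_loop(window=10, interval=1.0):
    """Show a rolling-average heart rate once a second:
    one big number to stare at while you try to make it go down."""
    samples = []
    while True:
        samples.append(read_heart_rate())
        samples = samples[-window:]               # keep the last N readings
        avg = sum(samples) / len(samples)
        print(f"\rHR: {avg:5.1f} bpm", end="", flush=True)
        time.sleep(interval)

# biofeedback_loop()   # uncomment once read_heart_rate() is real
```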

Training level 2: Learning to increase the amplitude of your alpha waves.

You'll need an EEG that displays your alphas as an easily readable graph or meter. See above notes – it's a very similar training process. You may find it helpful to research meditation techniques – there's a lot of literature about this elsewhere so I'll assume you can find it. 😉

Optional training level 3: Learning to lower your blood pressure

This one is harder. Because reading blood pressure is such a slow process, you'll need a lot of time to master lowering it. This is where things like imagining your 'happy place' come into play. However, I find this step is generally not necessary for stopping a panic attack, although it can help with the aftereffects of all that adrenaline dumping into your bloodstream.

Now that you've acquired the skills of lowering your heart rate and increasing your alphas, during a panic attack, do the following:

#1: Take several long, slow, deep breaths.
#2: Lower your heart rate consciously.
#3: Raise your alphas consciously.
#4 (optional): Lower your blood pressure.

That’s it. If your mind is similar to mine, this will put you back in a mental state where your anxiety is not the largest thing in the picture and you can then figure out what to do about whatever event made you panic to begin with. The first few times you do it, it will help to have a heart rate monitor in front of you.

Luck vs Choice?

Monday, December 4th, 2017

So, one of the questions I tend to ask myself, as I talk to people who can’t troubleshoot simple machinery, is to what extent did I get lucky and to what extent have my choices led me to where I am?

It’s a worthwhile question. Did a simple throw of the genetic dice, or the path that I was led down, lead to me being capable of understanding almost any human-made system? Or is it my repeated choices to read, to study, to attempt to fix things even when I don’t actually know how, to ask questions of other people, to – not to put too fine a point on it – continuously learn and evolve over the course of my life?

Sometimes I get incredibly frustrated when talking to people who are not as capable as I am and who repeatedly insist that they can't do something. Pretty much everything built by humans can be understood by humans and fixed by humans. And I wonder, is this a choice they're making? Do people choose to be less capable than they are biologically able to be? Sometimes it feels extremely choice-driven – and yet, I am not at all clear whether it is or not. Re: previous discussions on free will, I think that not everyone has as large a 'what can I choose to do right this second' list as I do, and I think some of that is that the more you learn, the larger your free will window becomes. So people who haven't been imbued with a can-do attitude and experienced validation of that attitude literally can't choose to believe that they can, for example, troubleshoot their car.

I have also seen people create large numbers of imaginary obstacles for themselves before they ever even attempt the job at hand. Now, I should mention that I think memetic disempowerment is a systematic problem with humanity – recently someone reminded me of the quote "All have sinned and fallen short of the glory of God", which I think is an *excellent* example of memetic disempowerment and one of the many reasons that Christianity deserves to be relegated to the dustbin of history. Yes, sure, believe you're going to fail before you try! That'll help! I also think that there is a fair amount of memetic disempowerment that goes on in our educational system – repeatedly grading people is not likely to help them feel empowered unless they happened to start out at the top of the ladder – and in our consumer-driven world as well, since after all, if you feel empowered enough you might not buy $WHATEVER.

I am sure I also create imaginary obstacles for myself, and I’m sure that I have also frustrated many people in the past in ways similar to how I am sometimes frustrated by others now. I do wonder, though, how much of this is a choice and how much of it is directed by the wiring and memetic programming?

Another question is, what do we owe those who can't? The political powers that be would, it would seem, like to throw anyone who isn't extremely capable in all areas under the bus – and I assume that sooner or later this will include me, since if we keep raising the hurdle, sooner or later I will not be able to jump it. It's clear that we could feed and house everyone if we wanted to – but it's also clear that we'd rather feel warm and fuzzy patting ourselves on the back as we throw those who are less capable under the bus. Personally, I think we should try to feed and clothe and house everyone – in fact, give everyone everything they want, to the extent of our capability – although there are those who argue that we wouldn't enjoy things if we didn't have to struggle for them.

I don't know. Rereading this post, I feel kind of like it paints me as an awful person, and that isn't really my intention at all.

From a facebook discussion: free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. "I" as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego, that this is an illusion. B: I am not sure, if I do exist, that I'm not deterministic. Experimenting with artificial neural networks, I note that they tend strongly towards the deterministic unless measures are taken to keep them from being deterministic. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet, the idea that I am a person wandering around taking actions of my own free will is very compelling. Especially when I start discussing the matter, which seems very meta.
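
To make point B concrete, here is a toy feedforward net in numpy. Feed it the same input twice and you get exactly the same output; you have to deliberately inject noise before it behaves non-deterministically at all. A sketch for illustration, not a proof of anything about brains.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A tiny fixed feedforward net: 3 inputs -> 5 hidden -> 2 outputs.
W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(2, 5))

def forward(x, noise_scale=0.0):
    """Deterministic unless noise is deliberately injected."""
    h = np.tanh(W1 @ x)
    if noise_scale > 0.0:
        h = h + np.random.normal(scale=noise_scale, size=h.shape)
    return W2 @ h

x = np.array([0.2, -1.0, 0.5])
print(forward(x))                   # same answer, every single run
print(forward(x))                   # ...identical
print(forward(x, noise_scale=0.1))  # only now does it vary
```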

Neural networks and what you can’t let go of

Wednesday, August 9th, 2017

I had an interesting thought the other day about natural neural networks and people who hold beliefs that are not reality-verifiable or are even likely to be false. This thought started with looking at climate change deniers and people who believe religions that don't appear to match the reality I'm experiencing, but it's gone a bit further than that.

This is more of my hand-wavy guesswork.

It has occurred to me that one of the major problems an NNN faces is that subnets will tend to build major nexus points. These nexus points would appear to us to be core beliefs – or even just important beliefs. Once one of these beliefs is built, and a whole lot of connections to a whole lot of other subnets route through it, we would naturally be extremely resistant to removing it, because we literally would be less able to function without it. In the case of religious (or religiously political) people – and I probably fit into this somewhat – letting go of their religion would make it far more difficult for their mind to work for a while – it would be somewhat similar to having a stroke. Major confluences of subnets which represented key ideas would no longer be valid – and it would likely be difficult to remove all of the traces of subnets like these, especially since there is a lot of redundancy in the way NNNs tend to wire. We may be extremely resistant to throwing out cherished ideas – even when they're proven wrong – because throwing them out makes it difficult for us to function at all, because all sorts of traffic is routed through them. They end up forming the underpinning for our personalities and decision trees.
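
A crude way to see the nexus-point effect is to play with a toy graph (this is just networkx on a random scale-free graph, nothing to do with actual cortex): knock out the single best-connected node and watch how much longer, or outright broken, the paths between everything else become.

```python
import networkx as nx

# A scale-free-ish toy graph: a few hub nodes, lots of leaf nodes.
G = nx.barabasi_albert_graph(n=200, m=2, seed=1)

hub = max(G.degree, key=lambda kv: kv[1])[0]      # the best-connected node
print("hub degree:", G.degree[hub])

before = nx.average_shortest_path_length(G)

G2 = G.copy()
G2.remove_node(hub)                               # "throw out" the core belief
largest = G2.subgraph(max(nx.connected_components(G2), key=len))
after = nx.average_shortest_path_length(largest)

print(f"avg path length before: {before:.2f}")
print(f"avg path length after (largest remaining piece): {after:.2f}")
print("nodes stranded outside the main component:",
      G2.number_of_nodes() - largest.number_of_nodes())
```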

I think if this is true, this is something we all need to understand and figure out the implications of. Christians brag of their faith being unshakable – but it might well be that if Jesus showed up in person and told them they were wrong, they would not be able to accept or integrate it, because their faith is often loaded virally onto them when they're very young and ends up forming the physical underpinning for large portions of their mental structure.

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn't give power to certain conservatives who are in favor of criminalization of marijuana – and I think you all know I don't smoke it but I'm an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head, and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – they would be able to have the same experience they’re having now, or wildly different experiences – a lot of experiences denied to all but a few would become open to everyone, such as the experience of being a rock star (simulated crowd unless you get *really* good at it and real people want to come see you, but I’d be into playing a simulated crowd, I’m not picky..)

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You'd have an interface for blocking people or locating new people you'd like to be in your life, for defining what you'd like your homes to look like and switching between them, for adding possessions – look at the video game The Sims, and you get a good idea of a lot of the interface you'd need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you're good at it. Otherwise, it's simSubjects for you. But I'd probably include code to allow you to forget that fact if you wanted to, so you could *think* you were ruling the free world. I'd also include 'conditional virginity' – (note that a lot of these are NOT my ideas, but the ideas of someone I talk to – $person's future self, so to speak) – so you could temporarily forget an experience you'd had and have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty virii loaded. (Well, we kind of have that situation now, don’t we ;-)). However, the advantages are pretty staggering. Among other things, a separate much smaller collection of neural code running under the hypervisor could do whatever body-care things needed to happen including farming, feeding, etc. In the meantime, you could eat a ten course meal if you wanted to and never gain a pound.

In addition, you could either choose to learn things 'the hard way' for the joy of the journey, or 'matrix-style' – many times I think you'd want to learn them the hard way when they were related to creating art, because that is the only way it would be "yours" and not just the group skill in playing the guitar or whatever. And for some things, like learning athletic skills, the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and get it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Hypocrisy and neural networks

Monday, May 8th, 2017

So, as we see hypocrisy abound in our current world political situation, it's become quite popular to criticize people for it. And I am not here to say that it is a good phenomenon – but it is certainly an *understandable* one.

So, first, before I head down this rabbit hole, let me draw your attention to videos of the Milgram experiments. One thing you will notice, over and over, is that people clearly were not of one mind about pushing the switch. They were obviously agonized over it, many of them protested or questioned the action, and yet ultimately the neural wiring that translated out to blind obedience to authority won. I know that I've discussed this before.

Now, I would say that this phenomenon is very closely related to hypocrisy. In both cases, you have collections of subnets that are at war with each other, or at least have a disagreement over what the correct action is. It's pretty clear that religion does a much better job of programming people to say the right things than to do the right things, and what that may indicate is that religion does a good job of programming the storyteller or verbal parts of our neocortex, but that a lot of the things that drive our actual actions are formed before religion ever gets its claws on us – they may be native to our DNA and the way it expresses itself, or formed in earlier childhood. Or it may be that they are formed later, but that some types of experiences lead to stronger collections of subnets than others. In any case, the thing to remember about hypocrisy is that generally, I think you will find, it happens when someone is of two minds about the subject.

For example, all the discussion about $CONSERVATIVE_POLITICAL_PARTY talking about how great $FAVORITE_RELIGION is while simultaneously doing things that are strongly against everything that $FOUNDING_RELIGIOUS_LEADER stood for is a great example of this. Some portion of their minds is in favor of tolerance and love and feeding the hungry and all of those things, but a larger portion of their minds is in favor of grabbing everything that isn't nailed down, and possibly some things that are. (It is also, of course, possible that no part of their minds is in any way in favor of $RELIGION, but that they are in favor of getting elected, and since there is currently no punishment for lying on your way to office, there's no reason not to claim to be in support of $RELIGION if it gets you the gig and the nice cushy salary for life.)

However, assuming good faith for the moment, let's suppose that they are sincere in their adoration of $RELIGION. That doesn't mean that their whole mind is – and, no matter how persistent the illusion that we're one single person per body, the truth is that we're a huge collection of subnets, all with different goals and agendas and experience. I know that I've already referred to this, but I point you again to the experiments of cutting the corpus callosum and the results that ensued.

I really think we’re not going to make serious progress until we start to accept some of the strengths and limitations of natural neural networks. Hypocrisy is in fact both. F. Scott Fitzgerald said “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.” – and this is exactly the behavior we’re talking about here. Without it, we would never really be able to weigh the validity of contradictory but true ideas.

“Us And Them” and neural networks

Sunday, February 12th, 2017

More of my hand-wavy guesswork about the structure of the human mind follows.

So, one of the interesting questions that comes up when thinking about NNNs is the question of 'us' and 'them'. It's a pretty standard part of human thinking to think of yourself as a member of a group (the 'us') and of people who are not members of that group as being 'the enemy', or at least subdesirable in some way. I don't think this type of thinking is all that helpful a lot of the time, but it's interesting to think about in terms of what it says about the underlying network.

Earlier, I hypothesized that while we as individuals have the ability to determine whether information is coming from inside or outside of us (or whether we think it is – in fact we're probably not in a great position to know for sure), very few neural subnets can tell the source of information – and in fact many subnets may not be able to tell a data access from a command from a teaching / learning moment. Extending that idea a little, it may be very difficult to abstract any external data of which no local copy exists.

It’s very likely that any attribute we can recognize in the “them” exists within us, since if it didn’t we wouldn’t have a frame of reference to think about it at all. This doesn’t mean we’re all mass murderers, but it does mean that we all have a collection of symbols surrounding the idea of mass murder. Generally, I imagine, that symbol is wired up in such a way as to inhibit such behavior in most of us. (After all, neurons do most definitely have inhibit inputs as well as excite inputs)
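
The inhibit/excite point is easy to cartoon in code: in any weighted-sum neuron model, an inhibitory connection is just a negative weight, and a strong enough one keeps the downstream 'act on it' unit from firing no matter how active the symbol feeding it is. Toy numbers only, nothing more.

```python
import numpy as np

def unit(inputs, weights, threshold=1.0):
    """A cartoon neuron: fires iff the weighted sum clears a threshold.
    Positive weights are excitatory, negative weights inhibitory."""
    return float(np.dot(inputs, weights) > threshold)

# inputs: [symbol active, inhibitory 'don't act on this' circuit active]
excite_only  = unit([1.0, 0.0], weights=[1.5, -3.0])  # fires: 1.0
with_inhibit = unit([1.0, 1.0], weights=[1.5, -3.0])  # suppressed: 0.0
print(excite_only, with_inhibit)
```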

Now, it's important to realize that a lot of these symbols are necessarily fairly large. You don't fit an idea like mass murder inside a single neuron, or even a hundred, and you also have to have some fairly large neural bridges sufficient to allow reaching between symbols that are physically somewhat disparate, because the overall system is so large that there are physical limits to what can be wired directly to what.

So, one of the questions – especially insofar as we've been discussing neural games of Go – is how much of 'them' is an interior part of us that is attempting to be an acting part at any given time. We – the controlling personality – are obviously going to resist acting on the urges and impetus of the parts of us that we would consider part of the 'them', but they're still very much active and engaged neural subnets which are participating in the overall big picture of making us who we are. If you removed them entirely, you would likely not get a stable or usable system. This would seem to play nicely into the philosophy of Yin and Yang.

DID and neural networks

Wednesday, February 1st, 2017

So, popular consensus is that DID is a mental illness caused by extreme trauma that fragments a personality into segments.

I assume it is news to no one that while I do not consider $future_person[0] an alter, I do believe that I have DID, although normally my alters stay very far backgrounded. I do however think that they all contribute to the overall system – that is to say, I think that, for example, when I'm jamming with the band and making up lyrics on the fly but my conscious experience is only slightly engaged in creating the lyrics (a phrase or fragment or concept), some wordsmith part of my mind is creating bits that rhyme and turning this into full-blown lyrics. For an example of this, check out this audio clip from band practice with Bruce, Art, and me – this was not a prewritten song, it was improv – clip

I think it is possible to have something that is a close kin to DID and have it be a more productive order than the average configuration rather than a disorder. The reason is that it enables the operator of the mind that is using this configuration to more effectively utilize the entire neural network.

Consider that normally, your conscious experience is only engaging with a few dozen threads at once – that’s all you can have ‘foregrounded’, or actively a part of your world. Now, obviously there are neural structures that do things like running a scheduler for running events at preset times, but if you have alters, you can also pass off foreground tasks that you don’t need to be actively engaged with to other bits of yourself – it’s kind of like the advantages of having multiple cores in a CPU. I don’t know if alters have a conscious experience, or just a head node and task list, or what – it would be fascinating to be able to look at the structure of my mind sufficiently to find out – but certainly they can be engaging neurons and neural subnets that would otherwise be completely idle.
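
In code terms, the multi-core analogy looks something like the sketch below (just the analogy, obviously, not a model of alters): a foreground loop keeps handling whatever needs active attention while a pool of background workers quietly chews through the tasks that were handed off to it.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def background_task(name, seconds):
    """Something handed off so the foreground doesn't have to track it."""
    time.sleep(seconds)
    return f"{name} finished"

with ThreadPoolExecutor(max_workers=4) as pool:     # the 'extra cores'
    handed_off = [pool.submit(background_task, f"task-{i}", 0.5)
                  for i in range(4)]

    # The 'foreground' keeps doing its own thing in the meantime...
    for beat in range(3):
        print("foreground: still improvising, beat", beat)
        time.sleep(0.2)

    # ...and only checks in on the handed-off work when it matters.
    for fut in handed_off:
        print(fut.result())
```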

Now, of course, I have no memory of what it might be like to *not* be this way. So it’s possible that I’m wrong and that I would simply be able to handle more threads if I wasn’t broken. I do seek certain types of reintegration, although with a fair amount of fear and trepidation because I’m hesitant to fuck too much with a running system.

Western Science

Thursday, January 5th, 2017

One of the problems I keep thinking about is that western science has one major flaw.

They don’t know what they’re measuring *with*. Until you know the answer to that question, you don’t know what you’re measuring. We don’t yet understand what we are – at least, if the hard problem of consciousness has been solved, no one has told me the good news. I’ve heard a lot of theories, but I haven’t heard one I’d call solid enough to call plausible yet.

In other words, dear scientists, please bump the priority on neuroscience and both ANN and NNN research. Dear warmongers, please stop wasting money blowing shit up until we can solve this more important problem. Kthx, Sheer.