Archive for the ‘NNN’ Category

A problem with parable based religions

Sunday, October 25th, 2020

So, I can’t remember if I’ve already talked about this or not, but one of the things I have been thinking about is how to build a neurological operating system that truly sets us free – enabling us to experience anything we want while also making sure that the necessary work for our bodies to stay fed etc gets done.

Anyway, part of the question is how would you load it? An ideal situation would be to let you load it just by reading a book, but this is really unlikely to work, and this underlines a big problem with Abrahamic religions.

The idea is that we’ll read these books and they will fundamentally change our behavior, but in reality, the part of our mind with the decision trees in it and the storyteller part of our mind are only peripherally connected. What’s worse, unbeknownst to us (or at least most of us), we may actually have *no* idea why we’re making the decisions we are.

I can’t seem to find a link for the article right now, but I remember reading an article about people who had undergone a corpus callosotomy and then had a sign placed in front of one eye saying “put on your coat”. They would then do so, and when asked why, they would say they were cold. The storyteller part of our mind certainly has a lot of skill at confabulating to justify decisions that were already made, but I don’t think it actually has much ability to interrogate the compiled decision trees and determine *why* decisions are made. It likely has a good idea which decisions *will* be made (although knowing the mechanism for that would also be fascinating), but training the storyteller portion of someone’s brain with, say, a parable will probably not change the decisions they make.

This explains quite handily all the Christians behaving awfully – for example, the Bible repeatedly goes on about treating immigrants decently, but many of the religious right feel warmly smug about treating them horribly. (They also justify their actions with “well, they broke the law”. Unjust laws were meant to be broken, and unjust governments meant to be unseated. This is the only way we can see progress over the arc of human existence, and we do indeed see progress.)

Anyway, leaving the politik aside for a second, it still seems clear by looking at religious adherents and how often they fail to live up to the precepts of their religions that loading a neurological operating system using stories simply does not work. As I said, I suspect this is because it’s affecting the wrong part of the brain.

 

Variations on a theme : protecting incorrect core beliefs in a NNN

Wednesday, September 30th, 2020

So, I’ve been reading Thinking, Fast and Slow, which talks about several things that I’ve already thought about considerably, but from the perspective of considerably more research than I’ve done on them. One of the things it’s underlined for me is the idea that our brains have both configuration that is still flexible and configuration that has been compiled – well, actually hardwired, via interconnections between neurons – so that it can run at sub-second speeds. As a musician I am trying very hard to get the connection between the music I imagine and what my fingers do built this way – at the moment, it is for my right hand but not for my left.

Anyway, one of the things I’ve been thinking about is the right wing’s continued defense of Trump even though he’s obviously an abomination. One of my friends, out of ways to defend Trump directly, has a never-ending series of ad-hominem attacks for Biden. This is the same friend who was once talking about how we shouldn’t have government healthcare because it could involve the government paying for a citizen’s mistake, even though he’s only alive because the government assisted him after he did something fairly boneheaded.

So I’ve been thinking about that, and about how we parrot the statements of our peers and the talking heads on the television without thinking about them, and part of what I’m contemplating is that we may do such things as part of the process that defends our core beliefs even when we know they’re wrong.

See, it takes a certain amount of neurochemical resources to rebalance our neural networks – one of the things that ends up happening is that subnets which have become large nexus points between interconnects remain relevant even when they represent a belief that’s been disproved, because other firing patterns still pass through them. Now, of course, as with things like a closed-head injury, there are systems in place to arrange for alternate wiring; however, that process must be pace-limited by the fact that it’s actually consuming resources – making wiring connections between neurons in a human brain is *not* free. *Firing* is not even free: it involves uptake of chemicals that must later be released, so there’s a limited amount of it that can happen in any given amount of time.

As a result, I would imagine we have evolved defense mechanisms that will protect core beliefs that large amounts of neural circuitry are routing through *even when we ourselves know they are wrong*. I wonder if that’s part of what’s going on with my friend, since the alternative involves him having a deep lack of self-awareness.

I also wonder – one of the things in general that’s difficult to absorb and understand about the right is how they can over and over see their cherished points of view being obviously proved false (the Laffer curve, for example) and then go back to them. And I wonder how much of that is the above phenomenon, and what sorts of checks and balances one needs to have in place to correct for the fact that humans will cling to beliefs that are provably wrong.

One part of what’s going on with this election is that people on the right are accusing nearly all news channels of being ‘fake news’ – so they are living in an alternate reality where Trump isn’t an evil bastard who steals from vendors and from the American people, is in massive debt, has routinely acted abysmally towards women, is probably a white supremacist, and lies constantly. Instead, everything the media says is “leftist lies”.

Now part of what’s alarming to me is that this demonstrates they have no memory, because we can point to things like Trump’s handling of COVID as demonstrating that he’s making statements that provably turn out not to be true in ways we can all remember. What’s also alarming is that even after Trump completely flubs COVID by treating it rather like the right treats global warming, the right will continue to go on about the “global warming hoax” – even though other science-y things demonstrated the scientists were right, they won’t recognize the pattern and start to listen to science.

These people are not in touch with reality, they don’t know it, and (possibly because of the above) there is no way to put them in touch with reality. I am not sure what the solution is going forward, but I am starting to think freedom of the press should be slightly abridged such that things like Fox and Friends must actively say at the beginning of each show “This is entertainment only. We are going to lie to you. None of what we are saying is true.” or some such.

The challenges of conditional virginity

Thursday, January 30th, 2020

So, those of you who have talked to me about my ideas for a neural operating system to enable humans to experience much greater freedom with the same resources know that one of the things I’ve talked about is ‘conditional virginity’, or perhaps ‘programmable virginity’ – the ability to forget something you’ve learned so you can experience it for the first time again, but only temporarily, so you can compare the two experiences. Now, while human experiential memory is well suited for this kind of stunt, the way we learn decision trees (and muscle memory) *really* is not – both because these things involve more than one system in the brain and also because of the way they are stored – for obvious reasons they are indexed against need, not against when and how they were learned. So you can never *really* achieve beginner mind again once you’ve learned about something, because even if you were to lose the memory of the first experience you would not lose the decision trees you built the first time you had it. And maybe this is just as well – I’ve been reading about a form of degenerative disease similar to Alzheimer’s except involving the decision trees instead of the memory, and it sounds terrifying. I imagine you would experience it almost as if someone else were driving the bus instead of you… which does underline the fact that the part of us that is making the decisions and the part of us that is having the experience are two different things, and I’m still not at all sure the part of us that is making the decisions doesn’t occasionally slide a totally false experience in on the part of us that is “on the ride”. It seems like this would have a distinct evolutionary advantage.

Wow..

Monday, January 7th, 2019

Only one musical post in all of 2018. Going to have to do better in 2019. I tracked ten different songs that I didn’t think were good enough to release in 2018, and I’ve tracked three so far in 2019. I’m not sure if I need to turn down the lint level, or if I’m just working towards another plateau. On the other paw, it’s not like I get emails clamoring for more of my music or anything 😉

One thing I’ve really been feeling is the sense of missing people. I miss Phoebe, I miss $PERSON, I don’t really ever seem to get over the people I’ve lost. I miss my uncle Joe… I’ve even reached the point of missing my dad, who is still in my life. (I have set up a camping trip with him – I’m not so stupid as to not fix the ones that can be fixed.)

One of the things with Phoebe is remembering and regretting all the stupid things I said, especially during our break-up. I know that I participated in breaking that friendship too badly to be repaired and I wish that I had a time machine so I could do things somewhat differently.

Ah well, we go on. What other choice do we have?

I think part of what bothers me about missing $PERSON at this point is that it’s been so long since I had any kind of contact that I have *no* idea who she is. At some point your copies of copies of memories have no real reliability to them at all, and generation loss has pretty much etched that one away to where it’s nothing but a guess. That, combined with the sense that the things that pushed her away were not really me – I mean, they certainly weren’t who I would choose to be, and they all occurred in extreme mental states.

Recently I spent some time talking to a facebook friend who seemed to have been experiencing an extreme mental state of her own. A number of my friends criticized me for this, or at least expressed doubt that this was a wise use of my time, but I am fairly sure that what I was doing fit nicely inside my philosophy of ‘be excellent to each other’, and that if more people behaved the way I do, the world would be a better place.

And I have to admit, as I research neural networks, my half-memories – often scarred, and punctuated by blackouts – of the periods where I wasn’t myself are telling. I’m fairly certain what I was experiencing was islanding – very large collections of subnets, large enough to be able to respond to stimuli but not large enough to sustain consciousness. This brings up the interesting question of, in DID, are the alters conscious? I’ve always assumed that they are, but then I’ve been doing kitteny neocortex research that is making me question that assumption.

One of the things I’ve realized is that there’s no way we currently know to tell whether a neural network is having a conscious experience or not. An NN will learn, and respond to stimuli based on what it’s learned, whether or not the ‘magic’ of consciousness is there. At this point I tend to agree with the person who theorized that consciousness is what information feels like when it’s being processed, but I think that’s only true in a very specific context which likely has to do with the way temporal memory works. However, in building my unsupervised learning system for the kittens, I found myself implementing something very similar to short-term memory, because in order to do unsupervised learning in the model I’m currently using, you have to let LTP create the bindings first, *then* learn the lesson. You also have to keep track of previous lessons so you can unlearn them if they turned out to be wrong. (At least, to solve the particular problem I’m working on at the moment you do.)
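The bind-first, learn-later, unlearn-if-wrong idea above can be sketched in code. This is a toy model of my own devising, not anything from a real neuroscience library – the class name, the pairwise-binding rule, and the learning rate are all assumptions made purely for illustration:

```python
class ToyLearner:
    """Toy two-phase learner: LTP-style binding of co-active units
    happens first; the weight change (the 'lesson') is committed
    afterwards, and each lesson is logged so it can be unlearned
    later if it turns out to be wrong."""

    def __init__(self, n):
        self.w = [[0.0] * n for _ in range(n)]
        self.history = []  # one list of (i, j, delta) per lesson

    def bind(self, active):
        """Phase 1 - co-activation creates candidate bindings."""
        return [(i, j) for i in active for j in active if i < j]

    def learn(self, active, strength=0.1):
        """Phase 2 - commit the lesson over the bound pairs."""
        lesson = []
        for i, j in self.bind(active):
            self.w[i][j] += strength
            lesson.append((i, j, strength))
        self.history.append(lesson)

    def unlearn_last(self):
        """Reverse the most recent lesson, like rolling back
        short-term memory before it consolidates."""
        for i, j, delta in self.history.pop():
            self.w[i][j] -= delta

net = ToyLearner(4)
net.learn([0, 1, 2])   # lesson 1: units 0, 1, 2 co-fire
net.learn([1, 3])      # lesson 2, later found to be wrong
net.unlearn_last()     # roll lesson 2 back
print(net.w[1][3])     # → 0.0 (lesson 2 fully reversed)
```

The history list is doing the job of the short-term memory mentioned above: without it, there would be no record of *which* weight changes belonged to the lesson that needs undoing.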

I haven’t really come up with any new years resolutions – I have a vague sense that I’d like to exercise more, vape less, eat less, write more music, and generally try not to break anything critical about my life.

Learning to damp out panic attacks

Sunday, September 23rd, 2018

So, recently I’ve been thinking about a skill that I acquired some time ago, and I think I can explain how to do it if anyone else would like to learn.

Note that to *really* do this requires some hardware you’ll need to pick up somewhere – namely, a pulse meter and an EEG.

Training level 1: Learning to lower your pulse.

You’ll need to get a pulse meter, and stare at it and try to lower the number on it. Like any biofeedback training, this takes time, and you’ll be most successful at learning to do it if you start practicing when you’re *not* experiencing a panic attack *first*. As with all biofeedback training, your mind is going to figure out how to achieve your goal mostly without you – knowing your goal is to lower the number on the meter it will try various things until it figures it out. Just keep trying, and you’ll find your way.

Training level 2: Learning to increase the amplitude of your alpha waves.

You’ll need an EEG that displays your alphas as an easily readable graph or meter. See above notes – it’s a very similar training process. You may find it helpful to research meditation techniques – there’s a lot of literature about this elsewhere so I’ll assume you can find it. 😉

Optional training level 3: Learning to lower your blood pressure

This one is harder. Because reading blood pressure is such a slow process, you’ll need a lot of time to master lowering your blood pressure. This is where things like imagining your ‘happy place’ come into play. However, I find it’s generally not necessary to stop a panic attack, although it can help with the aftereffects of all that adrenaline dumping into your bloodstream.

Now that you’ve acquired the skills of lowering your heart rate and increasing your alphas, during a panic attack, do the following:

#1: Take several long, slow, deep breaths.
#2: Lower your heart rate consciously.
#3: Raise your alphas consciously.
#4: (Optional) Lower your blood pressure.

That’s it. If your mind is similar to mine, this will put you back in a mental state where your anxiety is not the largest thing in the picture and you can then figure out what to do about whatever event made you panic to begin with. The first few times you do it, it will help to have a heart rate monitor in front of you.

Luck vs Choice?

Monday, December 4th, 2017

So, one of the questions I tend to ask myself, as I talk to people who can’t troubleshoot simple machinery, is to what extent did I get lucky and to what extent have my choices led me to where I am?

It’s a worthwhile question. Did a simple throw of the genetic dice, or the path that I was led down, lead to me being capable of understanding almost any human-made system? Or is it my repeated choices to read, to study, to attempt to fix things even when I don’t actually know how, to ask questions of other people, to – not to put too fine a point on it – continuously learn and evolve over the course of my life?

Sometimes I get incredibly frustrated when talking to people who are not as capable as I am and who repeatedly insist that they can’t do something. Pretty much everything built by humans can be understood by humans and fixed by humans. And I wonder, is this a choice they’re making? Do people choose to be less capable than they are biologically able to be? Sometimes it feels extremely choice-driven – and yet, I am not at all clear whether it is or not. Re: previous discussions on free will, I think that not everyone has as large a list of options in their ‘what can I choose to do right this second’ list as I do, and I think some of that is that the more you learn, the larger your free will window becomes. So people who haven’t been imbued with a can-do attitude, and experienced validation of that attitude, literally can’t choose to believe that they can, for example, troubleshoot their car.

I have also seen people create large numbers of imaginary obstacles for themselves before they ever even attempt the job at hand. Now, I should mention that I think memetic disempowerment is a systematic problem with humanity – recently someone reminded me of the quote “All have sinned and fallen short of the glory of God”, which I think is an *excellent* example of memetic disempowerment and one of the many reasons that Christianity deserves to be relegated to the dustbin of history. Yes, sure, believe you’re going to fail before you try! That’ll help! I also think that there is a fair amount of memetic disempowerment that goes on in our educational system – repeatedly grading people is not likely to help them feel empowered unless they happened to start out at the top of the ladder – and in our consumer-driven world, since after all, if you feel empowered enough you might not buy $WHATEVER.

I am sure I also create imaginary obstacles for myself, and I’m sure that I have also frustrated many people in the past in ways similar to how I am sometimes frustrated by others now. I do wonder, though, how much of this is a choice and how much of it is directed by the wiring and memetic programming?

Another question is, what do we owe those who can’t? The political powers that be would, it would seem, like to throw anyone who isn’t extremely capable in all areas under the bus – and I assume that sooner or later this will include me, since if we keep raising the hurdle, sooner or later I will not be able to jump it. It’s clear that if we wanted to feed and house everyone we could, but also that we feel warm and fuzzy about patting ourselves on the back as we throw those who are less capable under the bus. Personally, I think we should try and feed and clothe and house everyone – in fact, give everyone everything they want, to the extent of our capability – although there are those who argue that we wouldn’t enjoy things if we didn’t have to struggle for them.

I don’t know. Rereading this post, I feel kind of like it paints me as an awful person, and that isn’t really my intention at all.

From a facebook discussion : free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. “I” as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego, that this is an illusion. B: I am not sure, if I do exist, that I’m not deterministic. Experimenting with artificial neural networks, I note that they tend strongly towards the deterministic unless measures are taken to keep them from being deterministic. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet, the idea that I am a person wandering around taking actions of my own free will is very compelling. Especially when I start discussing the matter, which seems very meta.
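Point B above is easy to demonstrate concretely: a feed-forward network with fixed weights is a pure function, so determinism has to be deliberately broken. This is a minimal sketch (the weights, inputs, and noise level are arbitrary values chosen just for illustration):

```python
import math
import random

def forward(weights, x):
    """A fixed feed-forward step: same weights + same input
    always yield exactly the same output."""
    return math.tanh(sum(w * xi for w, xi in zip(weights, x)))

def forward_noisy(weights, x, rng):
    """Determinism only breaks if we deliberately inject noise
    into the activation."""
    pre = sum(w * xi for w, xi in zip(weights, x))
    return math.tanh(pre + rng.gauss(0, 0.1))

w = [0.5, -0.3, 0.8]
x = [1.0, 2.0, 3.0]

print(forward(w, x) == forward(w, x))  # → True: fully deterministic

rng = random.Random()                  # unseeded: different each call
print(forward_noisy(w, x, rng) == forward_noisy(w, x, rng))
```

Note that even the "noisy" version becomes deterministic again if the RNG is seeded – which is roughly the C-branch of the dilemma: randomness in these systems is something you bolt on, not something that emerges.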

Neural networks and what you can’t let go of

Wednesday, August 9th, 2017

I had an interesting thought the other day about natural neural networks and people who hold beliefs that are not reality-verifiable or are even likely to be false. This thought started in looking at climate change deniers and people who believe religions that don’t appear to match the reality I’m experiencing, but it’s gone a bit further than that.

This is more of my hand-wavy guesswork.

It has occurred to me that one of the major problems a NNN faces is that subnets will tend to build major nexus points. These nexus points would appear to us to be core beliefs – or even just important beliefs. Once one of these beliefs is built, and a whole lot of connections to a whole lot of other subnets route through it, we would naturally be extremely resistant to removing it, because we literally would be less able to function without it. In the case of religious (or religiously political) people – and I probably fit into this somewhat – letting go of their religion would make it far more difficult for their mind to work for a while – it would be somewhat similar to having a stroke. Major confluences of subnets which represented key ideas would no longer be valid – and it would likely be difficult to remove all of the traces of subnets like these, especially since there is a lot of redundancy in the way NNNs tend to wire. We may be extremely resistant to throwing out cherished ideas – even when they’re proven wrong – because throwing them out makes it difficult for us to function at all, because all sorts of traffic is routed through them. They end up forming the underpinning for our personalities and decision trees.
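The routing argument above can be sketched as a toy graph problem: if lots of traffic passes through one nexus node, deleting that node fragments the network. The graph here is completely made up – the node labels are hypothetical "belief subnets", with "B" playing the role of the core belief:

```python
from collections import deque

# Toy "belief graph": node "B" is a nexus that every route
# between the other subnets passes through.
edges = {
    "A": ["B"], "C": ["B"], "D": ["B"],
    "B": ["A", "C", "D", "E"], "E": ["B"],
}

def reachable(graph, start):
    """BFS: which subnets can still exchange traffic from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(len(reachable(edges, "A")))      # → 5: everything is connected

# Delete the nexus belief "B" and the network fragments.
without_b = {n: [m for m in ms if m != "B"]
             for n, ms in edges.items() if n != "B"}
print(len(reachable(without_b, "A")))  # → 1: "A" is stranded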

I think if this is true, this is something we all need to understand and figure out the implications of. Christians brag of their faith being unshakable – but it might well be that if Jesus showed up in person and told them they were wrong, they would not be able to accept or integrate it, because their faith is often loaded virally on them when they’re very young and ends up forming the physical underpinning for large portions of their mental structure.

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn’t give power to certain conservatives who are in favor of criminalization of marijuana – and I think you all know I don’t smoke it but I’m an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head, and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – they would be able to have the same experience they’re having now, or wildly different experiences – a lot of experiences denied to all but a few would become open to everyone, such as the experience of being a rock star (simulated crowd unless you get *really* good at it and real people want to come see you, but I’d be into playing a simulated crowd, I’m not picky..)

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You’d have an interface for blocking people or locating new people you’d like to be in your life, for defining what you’d like your homes to look like and switching between them, for adding possessions – look at the video game The Sims, and you get a good idea of a lot of the interface you’d need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you’re good at it. Otherwise, it’s simSubjects for you. But I’d probably include code to allow you to forget that fact if you wanted to, so you could *think* you were ruling the free world. I’d also include ‘conditional virginity’ – (note that a lot of these are NOT my ideas, but the ideas of someone I talk to – $person’s future self, so to speak) – so you could temporarily forget an experience you’d had, so you could have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty viruses loaded. (Well, we kind of have that situation now, don’t we ;-)). However, the advantages are pretty staggering. Among other things, a separate, much smaller collection of neural code running under the hypervisor could do whatever body-care things needed to happen, including farming, feeding, etc. In the meantime, you could eat a ten course meal if you wanted to and never gain a pound.

In addition, you could either choose to learn things ‘the hard way’ for the joy of the journey, or ‘matrix-style’ – many times I think you’d want to learn them the hard way when they were related to creating art, because that is the only way it would be “yours” and not just the group skill in playing the guitar or whatever. And some things like learning athletic skills the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and get it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Hypocrisy and neural networks

Monday, May 8th, 2017

So, as we see hypocrisy abound in our current world political situation, it’s become quite popular to criticize people for it. And I am not here to say that it is a good phenomenon – but it is certainly an *understandable* one.

So, first, before I head down this rabbit hole, let me draw your attention to videos of the Milgram experiments. One thing you will notice, over and over, is that people clearly were not of one mind about pushing the switch. They were obviously agonized over it, many of them protested or questioned the action, and yet ultimately the neural wiring that translated into blind obedience to authority won. I know that I’ve discussed this before.

Now, I would say that this phenomenon is very closely related to hypocrisy. In both cases, you have collections of subnets that are at war with each other, or at least have a disagreement over what the correct action is. It’s pretty clear that religion does a much better job of programming people to say the right things than to do the right things, and what that may indicate is that religion does a good job of programming the storyteller or verbal parts of our neocortex, but that a lot of the things that drive our actual actions are formed before religion ever gets its claws on us – they may be native to our DNA and the way it expresses itself, or formed in earlier childhood. Or it may be that they are formed later, but that some types of experiences lead to stronger collections of subnets than others. In any case, the thing to remember about hypocrisy is that generally, I think you will find it happens when someone is of two minds about the subject.

For example, all the discussion about $CONSERVATIVE_POLITICAL_PARTY talking about how great $FAVORITE_RELIGION is while simultaneously doing things that are strongly against everything that $FOUNDING_RELIGIOUS_LEADER stood for is a great example of this. Some portion of their minds is in favor of tolerance and love and feeding the hungry and all of those things, but a larger portion of their minds is in favor of grabbing everything that isn’t nailed down, and possibly some things that are. (It is also, of course, possible that no part of their minds is in any way in favor of $RELIGION, but that they are in favor of getting elected, and since there is currently no punishment for lying on your way to office, there’s no reason not to claim to be in support of $RELIGION if it gets you the gig and the nice cushy salary for life.)

However, assuming good faith for the moment, let’s suppose that they are sincere in their adoration of $RELIGION. That doesn’t mean that their whole mind is – and, no matter how persistent the illusion that we’re one single person per body, the truth is that we’re a huge collection of subnets, all with different goals and agendas and experience. I know that I’ve already referred to this, but I point you to the experiments of cutting the corpus callosum and the results that ensued.

I really think we’re not going to make serious progress until we start to accept some of the strengths and limitations of natural neural networks. Hypocrisy is in fact both. F. Scott Fitzgerald said “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.” – and this is exactly the behavior we’re talking about here. Without it, we would never really be able to weigh the validity of contradictory but true ideas.