Archive for the ‘NNN’ Category

What if there *isn’t* an objective reality?

Tuesday, July 13th, 2021

One of the topics I do occasionally worry about is: what if there just isn’t an objective reality? Since we know that our minds are easily powerful enough to generate an experience of reality created out of whole cloth, this seems possible. It would explain how, for some people, the Jan 6 USA misadventure was a bunch of tourists on the lawn while for a bunch of other people it was an armed insurrection, for example. It could of course go a lot further than that. It’s a worrisome concept, because it can’t be disproven – but if there isn’t an objective reality I’d really like to reprogram the simulator so that *my* reality is more what I’d like to be doing.

Mania, islanding, the Shannon limit, and stepped psych med dosing

Sunday, June 20th, 2021

This is going to be an article about one way mental illness can occur, with some side digressions into how we do not do a very good job of treating this particular failure mode.

So, those of us who don’t believe there’s some sort of voodoo going on in the human brain understand it to be a very, very large neural network. It has about 10^11 neurons, broken up into probably somewhere around 10^8 subnets. Those neurons have both excitatory and inhibitory inputs, and they are also affected by the chemical soup they live in in a number of ways – including that there is a limit to how many times a neuron can fire before it has to take up the chemicals that permit it to fire (because firing uses up resources), that a bunch of neurons firing near each other are all working out of the same resource pool, and that the presence of various other neurotransmitters (and even some more exotic things, like moving electromagnetic fields) can affect firing probability.
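As a toy illustration of that shared-resource constraint, here’s a sketch of a neuron that can’t keep firing faster than its chemical pool refills. Every number in it is invented for illustration – nothing here is physiologically calibrated:

```python
class Neuron:
    """Toy spiking neuron: excitatory and inhibitory inputs, plus a finite
    resource pool that is consumed by firing and only slowly refills."""

    def __init__(self, threshold=1.0, pool=5.0, recovery=0.1):
        self.threshold = threshold
        self.potential = 0.0
        self.pool = pool          # stand-in for the neurotransmitter supply
        self.pool_max = pool
        self.recovery = recovery  # how much the pool refills per time step

    def step(self, excite, inhibit):
        self.potential += excite - inhibit
        self.pool = min(self.pool_max, self.pool + self.recovery)
        if self.potential >= self.threshold and self.pool >= 1.0:
            self.potential = 0.0
            self.pool -= 1.0      # firing draws down the resource pool
            return True           # spike
        return False

# Drive the neuron hard: it fires freely at first, then throttles down to
# whatever rate the recovery term can sustain.
n = Neuron()
spikes = sum(n.step(excite=2.0, inhibit=0.0) for _ in range(100))
```

Even under constant maximal drive, the neuron spikes far fewer than 100 times in 100 steps – the pool, not the input, sets the ceiling.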

It is also possible there is additional voodoo going on – I’ve seen arguments that the brain is using relativistic effects, that it is using quantum effects similar to a quantum computer, that it is a lies-to-children simplified version of the actual system brought to Earth to help us understand, that it is actually a large radio receiver for a complex four-dimensional (or more) wave, and other less probable explanations. We can discuss how this relates to the soul in another article – this one is based on the idea that yes, it’s real hardware, and yes, it follows real physical laws.

One thing commonly commented about people who are experiencing mania is that they appear “fast”, sped up – and indeed, in some percentage of manic folks you can observe an increase in the frequency and amplitude of some of the various “clocks” the brain uses to help synchronize operations (i.e. alpha and beta waves, which are themselves somewhat mysterious insofar as an EEG is only picking up a gross average of millions of neurons, and even that is not likely to be too accurate given that the electrical signals have passed through the blood-brain barrier, bone, etc.).

Anyway, it seems completely reasonable to think that during periods of mania, signalling is occurring faster. One clear law of nature we’re aware of is referred to as the Shannon limit: for any given bandwidth and signal-to-noise ratio, there is a maximum signalling rate that can succeed. Attempts to exceed the Shannon limit (by signalling too fast) result in a breakdown of communication – the exact failure mode depends on the encoding method being used and some other variables.
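For the curious, the Shannon-Hartley formula behind this is simple enough to play with. The numbers below are purely illustrative – they are not measured neural values:

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """Shannon-Hartley limit: maximum reliable bit rate over a noisy
    channel of the given bandwidth and (linear) signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative numbers only:
c = shannon_capacity(bandwidth_hz=100.0, snr=10.0)
# Lowering the signal-to-noise ratio lowers the limit:
c_noisy = shannon_capacity(bandwidth_hz=100.0, snr=1.0)  # 100 * log2(2) = 100
```

The relevant intuition: push the rate above that ceiling, or degrade the channel’s SNR, and no encoding scheme can deliver the message intact.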

I am fairly convinced that some of the undesirable behaviors and effects of mania are the result of some of the signal pathways connecting the various subnets that make up a person’s decision trees experiencing signalling that exceeds the Shannon limit, thus resulting in islanding. Side effects here can include loss of memory formation (and apparent ‘jumps’ in time from the manic person’s POV), extremely poor decision making akin to having inhibitions suppressed by alcohol, and all sorts of interesting delusions. I think all of this is what happens when some of the longer inhibitory circuits stop carrying data, or meaningful data, because they are signalling beyond their Shannon limit, and thus the signal arrives at the other end either hopelessly smeared or of inadequate amplitude to deliver the excitatory or inhibitory input to the neuron in question.

In my case, one clear instance of islanding that has been repeatedly observed is the presence of multiple personalities. This is not to say that I have DID, but rather that this is what happens when islanding occurs in a neural network – you can think of a natural neural network as somewhat holographic, and indeed a number of experiments (too many to document here, but I can write a separate article on this topic if there’s interest) bear this out.

(I should also clarify, for those of you who aren’t familiar with operating an electrical grid: “islanding” occurs when individual parts of the system are out of touch with each other – in the case of the AC grid, because they’re physically disconnected or too far out of phase with each other to allow a connection to be made. Neural networks can display similar behaviors, and it’s possible to experiment with this in ANNs simply by programmatically disconnecting bits of them. We’ve had chances to explore a lot of the different ways islanding can behave in a natural neural network because of stroke, head injury, various experiments such as cutting the corpus callosum, and the like.)
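A minimal sketch of that disconnection experiment, treating the network as a plain undirected graph and counting connected components (“islands”) before and after cutting a bridge link. The six-unit topology is invented for illustration:

```python
from collections import defaultdict

def islands(n_nodes, edges):
    """Count connected components ('islands') in an undirected network."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for start in range(n_nodes):
        if start in seen:
            continue
        count += 1                 # found a new island; flood-fill it
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node] - seen)
    return count

# A toy 6-unit network: two tight subnets bridged by one long-range link.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
whole = islands(6, edges)                 # fully connected: one island
cut = [e for e in edges if e != (2, 3)]   # 'programmatically disconnect'
split = islands(6, cut)                   # the network has islanded
```

Cutting a single well-placed connection is enough to leave two subnets that can each still process locally but can no longer coordinate – which is the analogy I’m drawing to the long inhibitory circuits dropping out.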

It is possible that this state is even an evolutionary advantage, as having something which causes some members of the tribe to take risks they would not ordinarily take may be how we got to, for example, understanding that lobsters and crabs are edible. There are certainly advantages to taking intelligent risks.

Of course, one problem we have with this is that people in this state will often commit crimes, and while they are clearly not guilty by reason of insanity, our legal system loves to punish folks and is ever eager to make more money for the people running private prisons by putting them in jail. (It’s also extremely profitable for the lawyers.) I suspect the majority of nonviolent criminals are simply unable to manage the imperfect nervous system evolution has given us – survival of the fittest turns out not to be the best fitness function for creating creatures well suited to today’s world – and a number of them are probably also victims of abuse from predecessors who suffered from similar problems.

In the meantime, the solution that I have found – doses of a psych med stepped according to how fast the system is trying to run, in order to prevent revving past the Shannon limit – seems to be frowned upon by western medicine. They prefer the ‘I have a hammer, so every problem is a nail’ approach of using a steady-state dose no matter where in the cycle the individual being dosed is. The net result tends to be that the best medications for depression are hugely inappropriate when not in a depressed state, and the best medications for mania are hugely inappropriate when not in a manic state – therefore the patient ends up overmedicated, and often decides to go off the medication because of the damage it is doing to their quality of life.

On the other paw, using a stepped dose leads to very good results. (This is far easier when the cycle is predictable, as mine is, but could probably be done by measuring various metrics if the cycle is unpredictable – I don’t know, I haven’t had an opportunity to test it.) There is no overmedication during periods that are not manic or depressive peaks; in the case of medication that suppresses mania, you avoid amplifying depression; and the drug does not lose control authority, because it is not being overused.

(In this article, when I speak of a stepped dose, I mean a dose scaled to the need, stepping up as the system tries to run faster and down as it returns to normal. One advantage I have, which may or may not hold for all people, is that I can tell how fast I’m running by how long it takes me to get to sleep, and can step the dose up until I’m able to get to sleep within an hour of initiating sleep.)
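The control logic I’m describing fits in a few lines. To be clear, every number in this sketch is a made-up placeholder to show the feedback shape, not a real dosing value:

```python
def stepped_dose(minutes_to_sleep, current_dose,
                 step=1.0, target=60.0, floor=0.0, ceiling=4.0):
    """Toy feedback rule: step the dose up while sleep onset takes longer
    than the target (an hour here), and step it back down once it does not.
    All numbers are illustrative placeholders, not real dosing values."""
    if minutes_to_sleep > target:
        return min(ceiling, current_dose + step)
    return max(floor, current_dose - step)

# Hypothetical nightly observations as the system revs up and then settles:
dose = 0.0
for minutes in [120, 95, 70, 50, 45]:
    dose = stepped_dose(minutes, dose)
```

The point of the shape: the dose tracks the system’s actual speed in both directions, so you are never carrying a manic-peak dose through a normal week.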

I should also mention that even with a stepped dose it is very helpful to have some complex activity to engage in during manic periods in order to keep a load on the engine, as it were. I suspect it helps a lot to have activities that follow hard laws (programming, electronics, etc) in order to avoid drifting too far into mystical/magical/delusional thinking, which is another risk involved with mania.

A problem with parable-based religions

Sunday, October 25th, 2020

So, I can’t remember if I’ve already talked about this or not, but one of the things I have been thinking about is how to build a neurological operating system that truly sets us free – enabling us to experience anything we want while also making sure that the work necessary for our bodies to stay fed, etc., gets done.

Anyway, part of the question is: how would you load it? An ideal situation would be to let you load it just by reading a book, but this is really unlikely to work, and this underlines a big problem with the Abrahamic religions.

The idea is that we’ll read these books and they will fundamentally change our behavior, but in reality, the part of our mind with the decision trees in it and the storyteller part of our mind are only peripherally connected. What’s worse, unbeknownst to us (or at least most of us), we may actually have *no* idea why we’re making the decisions we are.

I can’t seem to find a link for the article right now, but I remember reading an article about people who had had a corpus callosotomy and then had a sign placed in front of one eye saying “put on your coat”. They would do so, and when asked why, they would say they were cold. The storyteller part of our mind certainly has a lot of skill at confabulating to justify decisions that were made, but I don’t think it actually has much ability to interrogate the compiled decision trees and determine *why* decisions are made. It likely has a good idea which decisions *will* be made (although knowing the mechanism for that would also be fascinating) – but training the storyteller portion of someone’s brain with, say, a parable will probably not change the decisions they make.

This explains quite handily all the Christians behaving awfully – for example, the Bible repeatedly goes on about treating immigrants decently, but many of the religious right feel warmly smug about treating them horribly. (They also justify their actions with “well, they broke the law”.) Unjust laws were meant to be broken, and unjust governments meant to be unseated. This is the only way we can see progress over the arc of human existence, and we do indeed see progress.

Anyway, leaving the politik aside for a second, it still seems clear from looking at religious adherents, and how often they fail to live up to the precepts of their religions, that loading a neurological operating system using stories simply does not work. As I said, I suspect this is because it’s affecting the wrong part of the brain.


Variations on a theme: protecting incorrect core beliefs in an NNN

Wednesday, September 30th, 2020

So, I’ve been reading *Thinking, Fast and Slow*, which talks about several things that I’ve already thought about considerably, but from the perspective of considerably more research than I’ve done on them. One of the things it’s underlined for me is the idea that our brains have both configuration that is still flexible and configuration that has been compiled – well, actually hardwired, via interconnections between neurons – so that it can run at sub-second speeds. As a musician I am trying very hard to build the connection between the music I imagine and what my fingers do this way – at the moment, it works for my right hand but not for my left.

Anyway, one of the things I’ve been thinking about is the right wing’s continued defense of Trump even though he’s obviously an abomination. One of my friends, out of ways to defend Trump directly, has a never-ending series of ad hominem attacks on Biden. This is the same friend who was once talking about how we shouldn’t have government healthcare because it could involve the government paying for a citizen’s mistake – even though he’s only alive because the government assisted him after he did something fairly boneheaded.

So I’ve been thinking about that, and about how we parrot the statements of our peers and the talking heads on the television without thinking about them, and part of what I’m contemplating is that we may do such things as part of the process that defends our core beliefs even when we know they’re wrong.

See, it takes a certain amount of neurochemical resources to rebalance our neural networks. One of the things that ends up happening is that subnets that have become a large nexus point between interconnects remain relevant even if they represent a belief that’s been disproved, because other firing patterns still pass through them. Now, of course, as with things like a closed-head injury, there are systems in place to arrange for alternate wiring, but that process must be pace-limited by the fact that it’s actually consuming resources – making wiring connections between neurons in a human brain is *not* free. *Firing* is not even free: it involves releasing chemicals that must later be taken back up, so there’s a limited amount of it that can happen in any given amount of time.

As a result, I would imagine we have evolved defense mechanisms that will protect core beliefs that large amounts of neural circuitry are routing through *even when we ourselves know they are wrong*. I wonder if that’s part of what’s going on with my friend, since the alternative involves him having a deep lack of self-awareness.

I also wonder – one of the things in general that’s difficult to absorb and understand about the right is how they can, over and over, see their cherished points of view being obviously proved false (the Laffer curve, for example) and then go back to them. And I wonder how much of that is the above phenomenon, and what sorts of checks and balances one needs to have in place to correct for the fact that humans will cling to beliefs that are provably wrong.

One part of what’s going on with this election is that people on the right are accusing nearly all news channels of being ‘fake news’ – so they are living in an alternate reality where Trump isn’t an evil bastard who steals from vendors and from the American people, is in massive debt, has routinely acted abysmally towards women, is probably a white supremacist, and lies constantly. Instead, everything the media says is “leftist lies”. Now, part of what’s alarming to me is that this demonstrates they have no memory, because we can point to things like Trump’s handling of COVID as demonstrating that he makes statements that provably turn out not to be true, in ways we can all remember. What’s also alarming is that even after Trump completely flubs COVID by treating it kind of like the right treats global warming, the right will continue to go on about the “global warming hoax” – even though other science-y things demonstrated the scientists were right, they won’t recognize the pattern and start to listen to science. These people are not in touch with reality, they don’t know it, and (possibly because of the above) there is no way to put them in touch with reality. I am not sure what the solution is going forward, but I am starting to think freedom of the press should be slightly abridged, such that things like Fox and Friends must actively say at the beginning of each show “This is entertainment only. We are going to lie to you. None of what we are saying is true.” – or some such.

The challenges of conditional virginity

Thursday, January 30th, 2020

So, those of you who have talked to me about my ideas for a neural operating system to enable humans to experience much greater freedom with the same resources know that one of the things I’ve talked about is ‘conditional virginity’, or perhaps ‘programmable virginity’ – the ability to forget something you’ve learned so you can experience it for the first time again, but only temporarily, so you can compare the two experiences. Now, while human experiential memory is well suited for this kind of stunt, the way we learn decision trees (and muscle memory) *really* is not – both because these things involve more than one system in the brain and because of the way they are stored: for obvious reasons they are indexed against need, not against when and how they were learned. So you can never *really* achieve beginner’s mind again once you’ve learned about something, because even if you were to lose the memory of the first experience you would not lose the decision trees you built the first time you had it.

And maybe this is just as well – I’ve been reading about a form of degenerative disease similar to Alzheimer’s, except involving the decision trees instead of the memory, and it sounds terrifying. I imagine you would experience it almost as if someone else were driving the bus instead of you – which does underline the fact that the part of us that is making the decisions and the part of us that is having the experience are two different things. And I’m still not at all sure that the part of us making the decisions doesn’t occasionally slide a totally false experience in on the part of us that is “on the ride”. It seems like this would have a distinct evolutionary advantage.

Wow..

Monday, January 7th, 2019

Only one musical post in all of 2018. Going to have to do better in 2019. I tracked ten different songs that I didn’t think were good enough to release in 2018, and I’ve tracked three so far in 2019. I’m not sure if I need to turn down the lint level, or if I’m just working towards another plateau. On the other paw, it’s not like I get emails clamoring for more of my music or anything 😉

One thing I’ve really been feeling is the sense of missing people. I miss Phoebe, I miss $PERSON – I don’t really ever seem to get over the people I’ve lost. I miss my uncle Joe. I’ve even reached the point of missing my dad, who is still in my life. (I have set up a camping trip with him – I’m not so stupid as to not fix the ones that can be fixed.)

One of the things with Phoebe is remembering and regretting all the stupid things I said, especially during our break-up. I know that I participated in breaking that friendship too badly to be repaired and I wish that I had a time machine so I could do things somewhat differently.

Ah well, we go on. What other choice do we have?

I think part of what bothers me about missing $_PERSON at this point is that it’s been so long since I had any kind of contact that I have *no* idea who she is. At some point your copies of copies of memories have no real reliability to them at all, and generation loss has pretty much etched that one away to where it’s nothing but a guess. That, combined with the sense that the things that pushed her away were not really me – I mean, they certainly weren’t who I would choose to be, and they all occurred in extreme mental states.

Recently I spent some time talking to a facebook friend who seemed to be experiencing an extreme mental state of her own. A number of my friends criticized me for this, or at least expressed doubt that it was a wise use of my time, but I am fairly sure that what I was doing fit nicely inside my philosophy of ‘be excellent to each other’, and that if more people behaved the way I do, the world would be a better place.

And I have to admit, as I research neural networks, my half-formed and often scarred memories – combined with blackouts – of the periods where I wasn’t myself are telling. I’m fairly certain what I was experiencing was islanding: very large collections of subnets, large enough to be able to respond to stimuli but not large enough to sustain consciousness. This brings up the interesting question of whether, in DID, the alters are conscious. I’ve always assumed that they are, but then I’ve been doing kitteny neocortex research that is making me question that assumption.

One of the things I’ve realized is that there’s no way we currently know of to tell whether a neural network is having a conscious experience or not. A NN will learn, and respond to stimuli based on what it’s learned, whether or not the ‘magic’ of consciousness is there. At this point I tend to agree with the person who theorized that consciousness is what information feels like when it’s being processed, but I think that’s only true in a very specific context, which likely has to do with the way temporal memory works. However, in building my unsupervised learning system for the kittens, I found myself implementing something very similar to short-term memory, because in order to do unsupervised learning in the model I’m currently using, you have to let LTP create the bindings first, *then* learn the lesson. You also have to keep track of previous lessons so you can unlearn them if they turn out to be wrong. (At least, you do to solve the particular problem I’m working on at the moment.)
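A very rough sketch of that “track lessons so you can unlearn them” bookkeeping – the class name and the bindings are invented for illustration, and my actual kitten system is considerably more involved:

```python
class ReversibleLearner:
    """Toy associative learner that logs each lesson so a lesson that
    turns out to be wrong can be precisely unlearned later."""

    def __init__(self):
        self.weights = {}   # binding -> strength
        self.history = []   # (binding, delta) per lesson, in order

    def learn(self, binding, delta=1.0):
        self.weights[binding] = self.weights.get(binding, 0.0) + delta
        self.history.append((binding, delta))

    def unlearn_last(self):
        binding, delta = self.history.pop()
        self.weights[binding] -= delta   # exactly reverse that lesson

m = ReversibleLearner()
m.learn(("bell", "food"))
m.learn(("bell", "shock"))   # this lesson turns out to be wrong...
m.unlearn_last()             # ...so reverse it, leaving the first intact
```

The design point is simply that without the history log there is no way to subtract one specific lesson without disturbing everything else that was learned alongside it.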

I haven’t really come up with any new years resolutions – I have a vague sense that I’d like to exercise more, vape less, eat less, write more music, and generally try not to break anything critical about my life.

Learning to damp out panic attacks

Sunday, September 23rd, 2018

So, recently I’ve been thinking about a skill that I acquired some time ago, and I think I can explain how to do it if anyone else would like to learn.

Note that to *really* do this requires some hardware you’ll need to pick up somewhere – namely, a pulse meter and an EEG.

Training level 1: Learning to lower your pulse.

You’ll need to get a pulse meter, stare at it, and try to lower the number on it. Like any biofeedback training, this takes time, and you’ll be most successful at learning it if you start practicing when you’re *not* experiencing a panic attack. As with all biofeedback training, your mind is going to figure out how to achieve your goal mostly without you – knowing your goal is to lower the number on the meter, it will try various things until it figures it out. Just keep trying, and you’ll find your way.

Training level 2: Learning to increase the amplitude of your alpha waves.

You’ll need an EEG that displays your alphas as an easily readable graph or meter. See the notes above – it’s a very similar training process. You may find it helpful to research meditation techniques – there’s a lot of literature about this elsewhere, so I’ll assume you can find it. 😉

Optional training level 3: Learning to lower your blood pressure

This one is harder. Because reading blood pressure is such a slow process, you’ll need a lot of time to master lowering it. This is where things like imagining your ‘happy place’ come into play. However, I find this one is generally not necessary to stop a panic attack, although it can help with the aftereffects of all that adrenaline dumping into your bloodstream.

Now that you’ve acquired the skills of lowering your heart rate and increasing your alphas, during a panic attack, do the following:

Step one: take several long, slow, deep breaths.
Step two: lower your heart rate consciously.
Step three: raise your alphas consciously.
Step four (optional): lower your blood pressure.

That’s it. If your mind is similar to mine, this will put you back in a mental state where your anxiety is not the largest thing in the picture and you can then figure out what to do about whatever event made you panic to begin with. The first few times you do it, it will help to have a heart rate monitor in front of you.

Luck vs Choice?

Monday, December 4th, 2017

So, one of the questions I tend to ask myself, as I talk to people who can’t troubleshoot simple machinery, is to what extent did I get lucky and to what extent have my choices led me to where I am?

It’s a worthwhile question. Did a simple throw of the genetic dice, or the path that I was led down, lead to me being capable of understanding almost any human-made system? Or is it my repeated choices to read, to study, to attempt to fix things even when I don’t actually know how, to ask questions of other people, to – not to put too fine a point on it – continuously learn and evolve over the course of my life?

Sometimes I get incredibly frustrated when talking to people who are not as capable as I am and who repeatedly insist that they can’t do something. Pretty much everything built by humans can be understood by humans and fixed by humans. And I wonder: is this a choice they’re making? Do people choose to be less capable than they are biologically able to be? Sometimes it feels extremely choice-driven – and yet, I am not at all clear whether it is or not. Re: previous discussions on free will, I think that not everyone has as large a list of options in their ‘what can I choose to do right this second’ list as I do, and I think some of that is that the more you learn, the larger your free-will window becomes. So people who haven’t been imbued with a can-do attitude, and experienced validation of that attitude, literally can’t choose to believe that they can, say, troubleshoot their car.

I have also seen people create large numbers of imaginary obstacles for themselves before they ever even attempt the job at hand. Now, I should mention that I think memetic disempowerment is a systematic problem with humanity – recently someone reminded me of the quote “All have sinned and fallen short of the glory of God”, which I think is an *excellent* example of memetic disempowerment and one of the many reasons that Christianity deserves to be relegated to the dustbin of history. Yes, sure, believe you’re going to fail before you try! That’ll help! I also think that there is a fair amount of memetic disempowerment that goes on in our educational system – repeatedly grading people is not likely to help them feel empowered unless they happened to start out at the top of the ladder – and in our consumer-driven world, since after all if you feel empowered enough you might not buy $WHATEVER.

I am sure I also create imaginary obstacles for myself, and I’m sure that I have also frustrated many people in the past in ways similar to how I am sometimes frustrated by others now. I do wonder, though, how much of this is a choice and how much of it is directed by the wiring and memetic programming?

Another question is, what do we owe those who can’t? The political powers that be would, it would seem, like to throw anyone who isn’t extremely capable in all areas under the bus – and I assume that sooner or later this will include me, since if we keep raising the hurdle, sooner or later I will not be able to jump it. It’s clear that if we wanted to feed and house everyone we could, but also that we feel warm and fuzzy about patting ourselves on the back as we throw those who are less capable under the bus. Personally, I think we should try and feed and clothe and house everyone – in fact, give everyone everything they want, to the extent of our capability – although there are those who argue that we wouldn’t enjoy things if we didn’t have to struggle for them.

I don’t know. Rereading this post, I feel like it paints me as an awful person, and that isn’t really my intention at all.

From a facebook discussion : free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. “I” as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego – that this is an illusion. B: I am not sure, if I do exist, that I’m not deterministic. Experimenting with artificial neural networks, I note that they tend strongly towards the deterministic unless measures are taken to keep them from being so. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet, the idea that I am a person wandering around taking actions of my own free will is very compelling – especially when I start discussing the matter, which seems very meta.
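Point B is easy to demonstrate with a toy net: a fixed-weight network maps the same input to the same output every single time, unless you deliberately inject noise. The weights here are arbitrary, picked only for illustration:

```python
import random

def tiny_net(x, noise=0.0, rng=None):
    """A fixed-weight two-layer net: fully deterministic unless noise
    is deliberately injected. Weights are arbitrary illustration values."""
    w1, w2 = 0.5, -1.25
    h = max(0.0, w1 * x)                        # one ReLU hidden unit
    jitter = rng.gauss(0, noise) if rng else 0.0
    return w2 * h + jitter

# Same input, ten runs: without noise, exactly one distinct output.
outs = {tiny_net(3.0) for _ in range(10)}
# With injected noise, the outputs now vary from run to run.
rng = random.Random(42)
noisy = {tiny_net(3.0, noise=0.1, rng=rng) for _ in range(10)}
```

Which is the nub of it: any unpredictability in a system like this has to be put there, either by explicit noise or by chaotic inputs – it doesn’t arise from the network on its own.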

Neural networks and what you can’t let go of

Wednesday, August 9th, 2017

I had an interesting thought the other day about natural neural networks and people who hold beliefs that are not reality-verifiable, or are even likely to be false. This thought started with looking at climate change deniers and people who believe religions that don’t appear to match the reality I’m experiencing, but it’s gone a bit further than that.

This is more of my hand-wavy guesswork.

It has occurred to me that one of the major problems a NNN faces is that subnets will tend to build major nexus points. These nexus points would appear to us to be core beliefs – or even just important beliefs. Once one of these beliefs is built, and a whole lot of connections to a whole lot of other subnets route through it, we would naturally be extremely resistant to removing it, because we would literally be less able to function without it. In the case of religious (or religiously political) people – and I probably fit into this somewhat – letting go of their religion would make it far more difficult for their mind to work for a while; it would be somewhat similar to having a stroke. Major confluences of subnets representing key ideas would no longer be valid – and it would likely be difficult to remove all of the traces of subnets like these, especially since there is a lot of redundancy in the way NNNs tend to wire. We may be extremely resistant to throwing out cherished ideas – even when they’re proven wrong – because throwing them out makes it difficult for us to function at all, since all sorts of traffic is routed through them. They end up forming the underpinning for our personalities and decision trees.

I think if this is true, this is something we all need to understand and figure out the implications of. Christians brag of their faith being unshakable – but it might well be that if Jesus showed up in person and told them they were wrong, they would not be able to accept or integrate it, because their faith is often loaded virally onto them when they’re very young and ends up forming the physical underpinning for large portions of their mental structure.