Archive for the ‘NNN’ Category

Different utopias

Saturday, February 27th, 2016

So, one of the problems that I think we’re going to keep bumping up against here on Earth, at least in the USA where we ostensibly have a democratically elected set of people driving the boat, is that we all have different definitions of what winning means.

Like, I’d love to live in a world where we have sex with our friends, where automation does any job a human doesn’t care to, where we all try very hard to be excellent to each other. A world where no one conceives without having chosen to, where children are raised by all of us under the precept of being excellent to each other. Where education and mental health are based on a solid understanding of what’s happening on the iron of our minds – understanding based on science, on taking measurements and learning what’s really happening, rather than based on narrative and our storyteller nature, which clearly often is quite capable of diverging completely from what’s actually happening on the iron.

I’d love to live in a world where the video games are immersive, and so are the movies and the books – where we build each other up, where we help each other experience the things we want to experience.

I'd love to live in a world where no one is designated 'less than', where we have finally noticed the arc of history (blacks, gays, etc.) and just started accepting that everyone is worthwhile and everyone matters.

I recognize that people should still have the option of suffering – that Hell still needs to exist, because that’s what some people are going to choose to experience. But I want to live in a world where no one is forced to suffer, either via their biology or via the actions of the group as a whole or mean-spirited individuals.

For some reason, I doubt that my utopia is the same as the Christian one. If everyone who's not religion X is going to be tortured for all eternity, I want out – not just that I want heaven, I want out of the system. I want a different deity. And I do not think I'm alone in this.

However, because my utopia and the utopia of, say, the religious right do not align, the goals we think are important to pursue and the way we want to spend the resources in the public pool are going to be radically different. Putting both my people and their people in a box and trying to come to some agreement politically about what we should be doing is likely to be problematic. And I don't think they should be denied their utopia, except where to do so would infringe on my rights to be free and loved and happy and complete.

I wonder how many different views of what a utopic experience might look like there are. I also wonder why some people need other people to be hurt as part of their utopia. I'm starting to think that might be one of the attributes commonly found in what we, somewhat as a trope, refer to as evil.

I do wonder what’s happening inside my neural net vs what’s happening inside the neural nets of those who fit in the mold I just described. There’s got to be something fundamentally different going on, and I don’t know what to make of it.

Teachability and the Milgram experiment

Wednesday, February 17th, 2016

TL;DR: The Milgram effect may arise from the fact that most subnets in a NNN can't tell the original source of authority-tagged information.

Warning: I haven't organized my thoughts around any of this at all, and I have an affection-starved cat interrupting me for more pets every few minutes, so this is likely to be one of my less coherent posts.


So, I just finished watching a movie about the Milgram experiments. The first thing that occurred to me is that the reactions people had to the experiment make it very clear that they were not in unified agreement about continuing to push the button – in fact, all sorts of subnets were asserting that they should stop. It does occur to me that in general natural neural networks must have some willingness to trust authority (at least properly authenticated internal authority) or the results would be utter chaos. In addition, at times it's a good idea to trust external authority, at least insofar as avoiding the lion that the sign is warning you about. However, clearly you shouldn't trust *anyone* who claims to be an authority, or you'll end up supporting the Trumps and Hitlers of the world as they do truly abysmal things – it is clear that people are willing to abuse our susceptibility to instructions from authority to have us do all sorts of things that shouldn't be done.


On the other hand, neural networks need to be willing to accept data from outside if we are ever to be able to go beyond what one person can discover in a lifetime – the susceptibility to authority is likely a part of the same process which makes us able to learn from the mistakes of others. So how does one retain that functionality while still telling the government "Hell, no, I won't go" when they are asking you to bomb Vietnam in some insane war over the ideology of resource allocation? I'm not exactly sure.


I do have a hunch that being aware of the Milgram experiments makes one less likely to be susceptible to that sort of influence. So it is possible to build an informational immune system of a sort. We likely also end up building informational immune systems that protect us from our own worst ideas – well, those of us who don't end up being Jeffrey Dahmer.


Now, this gets into a common digression for me. It's obvious to me that I have a fundamentally different view of what 'good' is than many people. In some cases, I can get inside their heads even though I don't agree with them, and in other cases, I feel much like there are aliens roaming among us. Like, I can understand the right-wing fear that we can't afford to feed and house and clothe everyone, or that if we did so we would damage their self-reliance and the further evolution of our species, and even the mindset that it's not fair that someone would be allowed to stay home and smoke weed (or whatever). I don't agree with any of these views, but I can understand their genesis. However, at some point along the ideological spectrum, I stop being able to even track why someone would feel that their definition of good was good. I can't get inside the mind of the person who thinks we should stone gay people, or the guy advocating for legalizing rape (yes, there really is one). In general, I can't get into the heads of the well-poisoners who have to drink from the same well.


This is a real phenomenon. I see it over and over. Now, in general, I think people should stop well-poisoning even when it doesn't affect them, and I think it's awful that people do it – more on this later, especially on the subject of sex and well-poisoning – but the ones I really cannot understand are the ones who want to poison the well they drink from. If you are advocating violence against minorities, that's what you're doing, because sooner or later, you're going to be that minority. If you are advocating violence in general, that goes double. Every time I see riots over police shootings and they are not carefully and well targeted against the police, but rather are against the communities who were already hurt by the police shooting, I wonder – and I'm sorry, but it's the truth – what is wrong with these people?


Now I have, over and over, seen that anger leads to bad and irrational decisions. In general, the people I know who get angry when they have computer problems can never, ever solve them – and sooner or later they lose me as a resource there, because I don't like to be around irrationally angry people. And I assume that the rioters are suffering from irrational anger, but I can't help but wonder, to bring this back to its original topic, are they also suffering from a bit of the Milgram effect? Do emotions like anger and fear make us more susceptible to being Milgramed? Or does a much wider range of emotions make us more susceptible?


Back to the subject of NNNs, I am really wondering: for most subnets in our mind, can they even tell the difference between inside signal and outside signal? How equipped are they to evaluate the validity of an order and the source of said order? I also wonder, for all the people who clearly wanted to stop increasing the voltage but did not, how difficult was the inner struggle between the parts of them that wanted to do the innately right thing and the parts of them that wanted to do what has been externally programmed to be the right thing?

There's no doubt that we're externally programmed to respond to authority with obedience – in America, it's a pretty common theme that if you don't, the cop whips out his gun and shoots you, and is told, at least privately, good job, officer. There are all sorts of authorities wielding power over us, everything from bad grades to unemployment and starvation and having nowhere to live to being physically abused – we live in a system that has pretty well built a way of programming us to be obedient. And yet, I think there are parts of us that refuse to participate in the horror show we're asked to engage in – soldiers often come back from blowing up other people at government command with severe psychological damage, for example, which suggests that the minds of many of us are not really geared for the idea of being awful. And clearly, most of the people participating in the Milgram experiment resisted to one degree or another – very few joyfully and willingly cranked the voltage up to 450. They just didn't resist *enough*.
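
To make the hunch concrete, here's a toy sketch in Python (with entirely invented class and field names) of what "can't tell the source" might mean: the message that reaches a subnet carries an authority tag but no provenance, so an external experimenter's instruction competes on exactly the same footing as an internal "stop" signal.

    from dataclasses import dataclass

    @dataclass
    class Message:
        content: str
        authority: float      # how authoritative the message claims to be
        # note: no 'source' field survives to this layer

    def subnet_obeys(inbox):
        # the subnet simply follows whichever authority-tagged message is strongest
        return max(inbox, key=lambda m: m.authority).content

    inbox = [
        Message("stop, he sounds like he's in pain", authority=0.7),          # internally generated
        Message("the experiment requires that you continue", authority=0.9),  # external
    ]
    print(subnet_obeys(inbox))   # the external instruction wins on tag strength alone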


Now, I keep advocating that psychology needs to throw away the storytelling and study what's happening on the iron – and part of this is that psychology is often obsessed with the idea that we are single, coherent individuals, when science suggests that while we have the experience of being single, coherent individuals, we're actually many, many collections of subnets. For those of you who haven't read about them, the experiments with cutting the corpus callosum strongly suggest we're the aggregate result of many, many subnets. At least on this track and in this world – I have had experiences which I can't easily explain but which suggest that we're not always at the whims of our hardware in quite the same way.


NNNs and communication protocols

Tuesday, February 2nd, 2016

So, in the discussions about what makes one identically-sized neural network smarter than another, there are a few obvious candidates – like the number and variety of interconnects – and then there are some more subtle ones, like routing protocols in use and means to handle collisions.

Many of my hypothetical readers may know the frustration of having an idea on the tip of your mind, or tongue, and feeling like you must act on it or say what it is or risk losing it forever. One can assume this behavior is even more of an issue for individual neural subnets. One thing that I have to imagine is an architectural choice we make very early in life is whether to use collisions, token passing, or some variant (like ALOHA) of the two. It seems likely that different subnet buses use different protocols, and that what is appropriate for one subnet bus (point of confluence) isn't appropriate for another.
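
For readers who haven't met these protocols, here's a rough Python sketch of the textbook versions of two of the options named above – slotted ALOHA versus token passing – purely to show the flavor of the tradeoff, not a claim about how actual neural tissue arbitrates a bus.

    import random

    def slotted_aloha(senders, p, slots=10_000):
        # everyone may fire in any slot; a slot only delivers if exactly one sender fired
        delivered = 0
        for _ in range(slots):
            talkers = sum(1 for _ in range(senders) if random.random() < p)
            if talkers == 1:
                delivered += 1
        return delivered / slots

    def token_passing(p_has_data, slots=10_000):
        # only the current token holder may fire, so nothing ever collides
        delivered = 0
        for _ in range(slots):
            if random.random() < p_has_data:
                delivered += 1
        return delivered / slots

    print(slotted_aloha(20, 1 / 20))   # tops out near 1/e, about 0.37
    print(token_passing(1.0))          # 1.0 when everyone always has something to say

Collisions are cheap and fast when traffic is light; token passing wastes nothing under load but makes every sender wait its turn. It's easy to imagine different points of confluence wanting different ends of that tradeoff.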

Clearly some subnets do have the ability to hold messages and retry them later – which is how we're able to set a mental note to revisit a topic and then experience a trigger to revisit it later. However, there is often the feeling with a new idea that we might lose it if we don't do something to make it somewhat more concrete. I suspect this is because:

A) Not all traffic is considered worthy of retries
B) Probably a very large number of messages get dropped that we are never aware of because they never protrude into our conscious experience

There are some subnets for which retrying message delivery would only hamper us – for example, there’s no point in revisiting the lion/no lion question either after it’s become proven there’s a lion or it’s become proven that there’s not. Most things having to do with the RTOS aspects of our mind are either interesting right now or they’re not interesting at all.

However, for the subnets for which the messages are of lasting interest, there is the question of how ideas are sequenced. I generally experience having one idea at a time, although I know my mind is capable of generating several at a time – my assumption is that they're rated by priority and the highest priority message wins access to my conscious experience. It seems like an interesting experiment to try to have several at the same time, but I'm not entirely sure how I'd go about it. Anyway, I assume that many ideas light up many subnets at the same time, and all of them signal, and only one of them makes it to my conscious experience.
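
Here's a minimal sketch of that guess in Python (the class and flag names are invented): several subnets signal at once, the highest-priority message wins the conscious channel, and whether the losers get another shot depends on whether their traffic is marked as worth retrying.

    import heapq

    class Confluence:
        def __init__(self):
            self.pending = []                      # max-heap via negated priority

        def signal(self, priority, idea, retry=False):
            heapq.heappush(self.pending, (-priority, idea, retry))

        def next_conscious_moment(self):
            if not self.pending:
                return None
            _, winner, _ = heapq.heappop(self.pending)
            # losers not marked for retry are dropped and never experienced
            self.pending = [m for m in self.pending if m[2]]
            heapq.heapify(self.pending)
            return winner

    bus = Confluence()
    bus.signal(0.9, "lion?!")                            # RTOS-style: interesting right now or never
    bus.signal(0.4, "idea worth writing down", retry=True)
    bus.signal(0.3, "that song again")
    print(bus.next_conscious_moment())   # "lion?!" wins the channel
    print(bus.next_conscious_moment())   # the retried idea surfaces later; the song is simply gone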

Back to the original topic, I assume that our more intelligent individuals are people who made better choices – or got better dice thrown – in terms of which subnets operate in which mode. I wonder how many modes are available to operate from.

Are larger neural networks stable?

Tuesday, February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore’s law holds – one interesting question is whether a larger neural network than us would be stable.

This is a subject that, if Google is to be believed, is of much scholarly interest. I'm still not at a place to evaluate the validity of the discussions – I'm still working my way through a full understanding of neural coding – but I think it's an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans in general all have the same number of neurons, so in order to get more of one aspect of performance, you’re going to have to lose some other aspect.

When we start building minds bigger than ours, the question that occurs is, will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you delve into intelligence, the more the system tends to oscillate or otherwise show signs of bad feedback coupling?
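
One crude way to poke at the question numerically, assuming nothing brain-like at all: in a toy linear recurrent network with random weights, the largest eigenvalue of the weight matrix grows roughly with the square root of the neuron count, so if you scale the network up without rescaling connection strengths it eventually tips from settling down into running away.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.05                                   # per-connection strength, held fixed

    for n in (100, 400, 1600):
        w = rng.normal(0.0, sigma, size=(n, n))
        radius = float(max(abs(np.linalg.eigvals(w))))
        print(n, round(radius, 2), "runs away" if radius > 1.0 else "settles")
    # roughly: 100 -> 0.5, 400 -> 1.0, 1600 -> 2.0

Real brains obviously aren't linear and aren't random, but the sketch at least shows why "just add more neurons" isn't automatically a recipe for something stable.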

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we're going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it just so that it will want something we can offer, what moral position does that leave us in?

Neural networks in output mode

Sunday, January 31st, 2016

So, one of the common threads of the last few years has been me considering the possibility that nothing I am experiencing is happening to anyone but me – or possibly, just a subset of what I am experiencing is happening to only me, while other bits are happening to everyone. Certainly, I'm questioning how much my conscious experience has to do with the data coming at me.

One of the bits of research that really underlined the validity of this was this. In essence, researchers discovered that artificial neural networks configured for image recognition could produce *output* that was related to the input they were trained to recognize. If you needed a larger neon sign announcing that what you're experiencing might not have that much to do with what's coming in on your senses, I don't know what to do for you.
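
For the curious, here's roughly what that trick looks like in code – not the researchers' actual recipe, just a minimal gradient-ascent sketch using a stock pretrained classifier (PyTorch/torchvision assumed; the class index is a hypothetical pick). Instead of adjusting the weights to fit an image, you hold the weights fixed and adjust the image until the network "sees" what it was trained to recognize.

    import torch
    import torchvision

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)    # start from a blank image
    target_class = 207                                        # hypothetical pick of an ImageNet class

    for _ in range(100):
        score = model(img)[0, target_class]
        score.backward()                                      # how should the pixels change?
        with torch.no_grad():
            img += 0.1 * img.grad / (img.grad.norm() + 1e-8)  # nudge the image toward the class
            img.grad.zero_()
    # img now holds structure the network associates with that class: output, not recognition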

As my experience polarizes further and further towards smart and dumb and love and fear I get more and more hints about the underlying patterns. And more and more food for thought about what experiences might be coming from where.

One thing I've definitely experienced is memory alignment issues. One of the reasons I keep this journal is so I can go back and read it and check to make sure what I remember and what I talk about is the same. A force working against that is that it's hard to honestly talk about things that went wrong in my life, and so back in the day I didn't. This is something I've changed a fair amount, but it is scary – especially when I see things like facebook banning sheer.us, although, after careful consideration of what facebook is, I've decided that's a compliment.

Yes, I've apparently finally achieved being a true radical, rather than the political equivalent of a script kiddie. I'm starting to have alternate suggestions about how to do fundamental things. It may be that none of them are any good, but it may also be that the only way to find out is to simulate them. One of the exciting things about seeing the singularity (a mind bigger than a human's) rushing up at us is that if we can make friends with a trillion-neuron mind (which may be a challenge) we might be able to get some real answers about what the best configuration for the world might be. That's assuming a trillion-neuron mind is even stable, a subject I hope to write an article about soon.

Obsolescence

Sunday, January 31st, 2016

So, with the singularity apparently about 15 years away, I find myself pondering the questions of why I am here and what I am good at in a different light.

The only meaningful answer I can come up with is to experience things from my point of view. I have no doubt an artificial neural network that's bigger than I am can write better music, better text, better code. But it can't *experience* in the same way I can – I don't doubt that it can have a conscious experience, but it's going to be *different*. I think. It'll be hard to even really find out the answer to that question, but for the moment I assume what I bring to the table isn't so much intelligence as it is a particular, unique flavor.

One thing I'd really be curious to find is someone else with a blog similar to mine. I feel a lot of the time like I'm pretty unique, but perhaps there are in fact millions of people like me out there. (Although you would think that if there were, capitalism would have died an honorable death by now, replaced by something that worked better.)

I actually sometimes think capitalism would work beautifully if everyone understood that the money has no value in itself. It's not the basic system that's flawed, but rather the set of ideas we've built up on top of it.

But I remind myself of the great depression. And what’s impressive to me about the great depression is there was no shortage of steel, or copper, or food, or power. The shortage was of money flowing. And we accepted that.

Sometimes I think humans are entirely too caught up in the rule of law. The sexting teens being arrested are an impressive example of this, but there are tons of examples. We think A: we need to make rules, and B: we need to punish people who don't follow them, even when they are stupid rules.

But then, I’m not the average person. I read the bible saying to stone gay people and know, this isn’t the work of a higher power and never was. Others read it saying that and say, that’s god’s word, we’d rather our children commit suicide than change our minds about that. (I’m looking at you, Mormons.. )

Anyway, back to the original topic. So, I don’t think I will be obsolete even when there are life forms more advanced than I am, because I don’t think they’ll be able to experience the world the same way I do. Now, granted, I’d really rather be experiencing a much better world, which is part of why I like the idea of there being life forms more advanced than I am – it’s possible that if we build something with a trillion neurons, and it explains to us how dumb our economic system is, we might just listen. Or perhaps it’ll explain to us that it’s absolutely perfect, and then it’ll explain why in a way that can reach me, and I’ll no longer feel like my friends are constantly barely making ends meet mostly because we built a badly designed world.

Rights for electronic life

Saturday, January 30th, 2016

So, recently I ran across this.

My first reaction was, holy shmoo, the singularity is almost here!

Actually, there are all kinds of interesting problems here. I've talked with a number of my friends about the question of whether, if we created an accurate software model of a human, it would exhibit free will. It's a really interesting question – if the answer is yes, that's a serious blow to theology but a major boost to the rest of us.

But there's a natural side question which comes up – which is, supposing we can get the neuron count up from a million to a billion per chip. If Moore's law were to hold, that factor of a thousand would take – let's see, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 – ten doublings, so ten 18-month cycles, about 15 years. At that point, making a 100-billion neuron mind out of the chips becomes practical. Said creature has as many neurons as we do – but is it a person?
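
A quick sanity check on that arithmetic (assuming one doubling per 18-month cycle, which is the generous reading of Moore's law):

    factor_needed = 1_000_000_000 / 1_000_000     # a billion neurons per chip vs a million: 1000x
    cycles, count = 0, 1
    while count < factor_needed:
        count *= 2
        cycles += 1
    print(cycles, cycles * 1.5)                   # 10 cycles, 15.0 years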

My guess is, legally, initially, no. In fact, we’ll probably see all sorts of awful behavior as we debug, including repeatedly murdering the poor thing (turning off the power, over and over).

We may even see them turned into slaves, although I really hope we're beyond that by now. I don't mind enslaving small neural nets that will never show free will or understand suffering, or enslaving Turing machines which are incapable of an original thought, but the idea of enslaving something that's as capable as we are is disturbing.

At some point, however, we’ll have to acknowledge that a person’s a person, no matter what they’re made of. I see signs we’re moving in this direction with India granting personhood to dolphins (about bloody time!) and I have hopes to someday see it granted to any individual who can pass the mirror test. (If you know you’re a person, then you are)

It does remind me of "Jerry Was a Man". It's a question we'll have to wrestle with – I hope we haven't gotten so locked into the idea that electrons just do what we tell them to with Turing machines (where that's true) that we can't realize that if we build a sufficiently large neural network out of transistors, it has the same rights that we do – in fact, 'birthing' might be a better phrase than 'building' here, since we are undoubtedly creating a new life form.

There’s all sorts of interesting corollaries to this as well. If we succeed in building something self-aware out of transistors, our race will be experiencing first contact. Granted, we’ll have *built* ET instead of met him out there in the sky, but that doesn’t change the fact that it is first contact. A life form made out of silicon is likely to be *different* – have different values, enjoy different things. This has been explored quite a bit in science fiction, but it was completely news to me that I was going to see it in my lifetime (assuming the actuarial tables describe me) as science fact.

If we build something 100 billion neurons in size and it's *not* self-aware, this also has interesting implications – it raises the question "Where is the magic coming from?" This outcome would also be incredibly cool, and lead us off on another, equally interesting set of adventures.

There's also the question of the singularity – what happens when we build something with 200 billion neurons? There's another article I keep meaning to write about intelligence and stability, but one interesting thing I would note is that, plus or minus a few percent, all humans have the same 100 billion neurons, so increased intelligence or performance in our minds comes from changing the way we connect them. It's possible that a larger neural net won't be more intelligent at all – or that it will be completely unstable – or that it will be much, much, *much* more intelligent. All of us are going to be curious about what it has to say, in the latter case, and in any case we're going to learn a lot of interesting things.

However, I do think we should all sit down and talk about the ethical issues *before* we build something that should have legal rights. I think we probably will – this has been addressed in numerous forums so it’s undoubtedly something people are aware of. One of my favorite Star Trek themes, addressed numerous times in TNG.

punishment

Friday, January 29th, 2016

So, this is one of an interesting class of articles – me meditating on a concept in the hopes of finding particularly broken subnets in my mind, not to mention finding out what I believe.

I've talked before about how stupid I think our criminal justice system is. The way we choose to punish criminals – who are generally mentally ill to begin with or they wouldn't feel the need to commit crimes – tends to make them more mentally ill, not to mention give them a legitimate reason to hate our society and want it, as an overall system, to suffer. It's also cruel, not to mention pointless. It also seems to be built to hurt the support systems and loved ones of anyone who commits a crime, and it also seems built in such a way that it does not improve the lives of the victims of the criminals. In other words, it looks kind of like it's designed to make the world worse in a bunch of ways at the same time.

Now, experiences in puppy training have taught me that you can not teach all lessons with positive feedback. Just try to teach a dog “Don’t jump” using nothing but positive feedback. Let me know how it goes.

Now, no matter how much I reward Luna for not jumping when she first sees me, on the rare occasions when she manages to contain her wiggling puppy enthusiasm, she has *no* clue why I'm rewarding her. It may be that over a very long time she will come to understand.

On the other hand, negative feedback doesn't seem to work that well either. The negative feedback she responds most strongly to is being swatted gently on the snout with paper – I think it's a sound thing, but what's funny is that if I grab, say, a piece of mail, she won't jump. So she kind of learned the wrong lesson there – what she absorbed was "Don't jump when your friend has something he could swat you on the nosie with".

Now, ideally, she and I would just talk about this, but Luna doesn't have much of a grasp of English yet. And it's also no doubt challenging for her because she's *so* excited – every bone in her furry body wants to propel itself at me and assert that she loves me loves me loves me loves me. Which I can sort of sympathize with.

So, I don’t want to use stronger negative feedback. I’m sure there are a number of things I could do to her, involving all sorts of negative feedback signals, that would make her stop jumping. But I am not, at least for now, willing to risk hurting her to modify her behavior. So I guess she’s trained me to accept having a puppy launch herself at me whenever she hasn’t seen me in a while.

Anyway, that was a bit tangential, but the question here is, when is punishment appropriate, and how much? This isn't just an academic question – too much punishment will make an enemy out of whoever you're punishing – even potentially make them desire your destruction.

What’s all this about? Well, I have a number of neural subnets that are not behaving the way I would like them to, and I’m trying to decide how much negative feedback is appropriate. Part of the problem is they’re giving *me* negative feedback, and I do not want to end up locked in a revenge cycle within my own mind. However, at some point I will run out of patience and kick them off the island. I think a few hundred subnets already know what I’m talking about here.

Being a love-oriented individual, I really don't like to hate people, things, or subnets. However, take the adversary I mention in previous posts. I find it really difficult not to hate this individual. They repeatedly spend their energy and time trying to make my life worse for no sane reason that they've ever shared with me. It's like anonymous on the s-net.

What's even more awkward is I definitely have moments of hating my sister, but I know it's very likely that she's not so much evil as very, very broken. And I certainly don't want her to hurt more. But I can't think about her without feeling angry. I can't figure out why she either doesn't feel at all bad about what she did to me, or doesn't share that fact with other people. I have contemplated the possibility that she's a sociopath. What's really bizarre is how wonderfully she treats four-foots and other non-humans. It does underline the fact that she's not without merit, which makes the way I feel about her even more upsetting. Then there's the part of me that remembers all the times I was attacked by her, and in all the ways. I choose to mask these memories for the most part from my conscious experience, but they've not been deleted, and I can still experience memories of her *actually* kicking me in the stomach, for example, any time I want. $person, since you brought it up, what would you do if you had a sibling who was constantly violent towards you, both physically and emotionally? I know what I would do *now* – and that is ask for different parents / a different house to live in – but at the time I wasn't capable of *seeing* that possibility.

Okay. Now that I’ve gotten *totally* off the original subject, let’s go back to the question. First of all, hypothetically, what would I do to criminals?

Well, ideally, I'd have the resources to throw them in a virtual-world jail where they could interact with the rest of us as long as they weren't committing crimes, and if they were, they could go off and do them in a virtual world with no one getting hurt.

Failing that? I don’t know. I doubt it’s moral for the state to hurt individuals. Then again, it’s not moral for individuals to hurt each other. You have to do something, or you end up with a world which sucks a lot. I am sure I would try to make the jail cells as comfortable as possible, and that I’d have a computer terminal with access to the best media we could find for helping people grow in every one of them.

I’m also sure I’d never put anyone in jail for things which didn’t hurt other people. I feel like our government should owe billions to the victims of the drug war – the people we put in jail for playing with their blood chemistry.

I also feel like the people have spoken. If this is a democracy, and most of us break a law, it was an unjust law and needs to go. You do need to somehow protect the minority from the tyranny of the masses, but in the case of the drug war, it's the masses that need to be protected from the tyranny of the ruthless, which is what we currently have.

I would really like to live somewhere better than here, and one of the big reasons is that Earth is in love with punishment. We can’t quite grok that it’s not moral for us to hurt people for not being like us.

I spent some time talking to a friend of mine about Javert, from Les Mis – a wonderful example of a way that the law can get it wrong. I like to cite teens sexting being charged with child pornography and similar as an example of how impressively wrong we can get it – how horrible we are to ourselves at times, with no possible defense at all. People get so slavishly attached to the law that they will make victims of the very people the law was written to protect. Is it any wonder I hate our criminal justice system?

What I learn from it is, in general, don’t punish. If you think you are punishing, as opposed to educating or assisting in growth, you have already failed. If you suspect yourself of punishing, stop, take a deep breath, regain a centered place of patience, and try again.

Interesting

Monday, January 4th, 2016

What does the Dunning-Kruger effect say about neural networks?

It doesn't seem shocking to me – based on my assumption that in humans, everyone has approximately the same number of neurons, it would then follow that the more intelligent humans have either more connections between neurons, or a more efficient algorithm for deciding which connections to make.

I am totally going out on a guesswork limb here (I'll have to follow up with some research to see if others have come up with the same guesses, and to see if other people have found any way to know, or at least have a high probability of knowing), but I'm thinking that the more connections you have, the lower the probability of feeling 'certainty', because the lower the probability that a large number of connections will respond with a positive match – or, if you do get a solid positive match on some connections, you'll get a solid negative match on some others that are only partially relevant. So in essence, the bigger your leafset for any decision, the less solid of a match you're going to get on anything. Hence, the more intelligent you are, the less certain you're going to feel.
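
Here's a toy numerical rendering of that guess (all the numbers are invented): a decision polls a handful of core connections that genuinely match, plus a growing pool of partially relevant ones that vote either way, and 'certainty' is just the absolute mean vote. The more connections you poll, the murkier the verdict.

    import random

    def certainty(total_connections, core=5, trials=2000):
        scores = []
        for _ in range(trials):
            votes = [1] * core                     # the genuinely relevant connections agree
            votes += [random.choice([1, -1])       # the partially relevant ones vote either way
                      for _ in range(total_connections - core)]
            scores.append(abs(sum(votes)) / total_connections)
        return sum(scores) / trials

    for n in (10, 100, 1000):
        print(n, round(certainty(n), 3))           # the felt 'certainty' shrinks as n grows

Which is, of course, only the guess restated in code, not evidence for it.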

Of course, part of intelligence is knowing when you don’t know. This is actually one of the most important skills in my toolkit – part of why I get paid the big bucks, I suspect, is that I know when I don’t know.

..

Sunday, January 3rd, 2016

Watching the movie Amy has me thinking about addiction and neural networks.

I suspect that despite the simplistic observation that we have free will, what we actually have is free will in building the structure of our minds, which then informs our decisions.

I further suspect that there is a ratio between our memories of the initial results of a decision and our memories of the long term results of that decision that affects the 'dice' we throw when making it. (Basically, this ratio controls whether we, in AA parlance, "play the tape all the way to the end.") In most cases, overusing drugs has a high initial score in terms of knowing that it is going to have pleasurable results, or at least at one point in time did so, while it has a very low overall score insofar as we know that the long term results will not be good. The long term results also probably form a very large probability distribution (as, realistically, anything that you do repeatedly will tend to have, because life is full of surprises) while the short term results probably fit in a much smaller probability distribution (i.e. the initial results tend to follow predictable patterns, whereas the long term results tend to be chaotic).

What does that look like in neural network land? Well, the subnets that have the long term results stored are not in as good a position to be predictive as the subnets that have the short term results stored. So, you need to have the ability to do some classing of resultsets. Normally, I eschew black and white thinking, but in avoiding addiction, it’s a very useful skill – PROVIDED you’re using it in the right direction. What you do NOT want to do is use black and white thinking to prove to yourself that you’ve failed. (There’s no winning percentage in kicking yourself). Instead, you need to use black-and-white thinking to filter the large probability distribution of all the memories of previous long term results into basic classes of ‘good’ vs ‘bad’ – now this likely doesn’t happen in a way that you can immediately see on the surface, so this article is really only useful for those of you who are into modifying the structure of your mind and have learned a fair amount of how to do it. Anyway, filtering in that way will let you ‘play the tape all the way to the end’ in parallel and average the results even though they’re all over the map. Done correctly, this can be a powerful tool.
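
Here's a sketch of that filtering idea with invented numbers: the short-term memories of using are tightly clustered and pleasant, the long-term ones are scattered all over the map, and binning the long-term ones into plain good/bad before averaging gives a usable "play the tape all the way to the end" signal.

    import random

    random.seed(0)
    short_term = [random.gauss(+2.0, 0.3) for _ in range(50)]     # predictable and pleasant
    long_term  = [random.gauss(-3.0, 4.0) for _ in range(50)]     # chaotic and, on average, bad

    binned = [1 if outcome > 0 else -1 for outcome in long_term]  # black-and-white filter
    print(sum(short_term) / len(short_term))   # the short-term pull
    print(sum(long_term) / len(long_term))     # the raw, noisy long-term average
    print(sum(binned) / len(binned))           # the filtered verdict: mostly "bad"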

In terms of measuring success vs failure, what you want to do is filter the other direction. If you delayed using by even a minute, if your actions matched your intentions more than the last time you walked through this cycle, you’re succeeding. And you want to champion your successes, because NNNs learn much more easily by success than by failure. (There’s a very good reason for this, which I will likely go into in a future article – you might say it’s a design feature)

It might be interesting and informative for me to go through all the classic cognitive distortions here to figure out what I would guess they look like in a NNN.


Unrelated note:

One thing that really stands out to me in the movie is that Amy would probably have been okay if her label had not considered her contractual obligations more important than her life. In general, this is a flaw we repeatedly see in corporations, and I think it would go away if everyone in the world understood the Milgram effect and fought it, choosing instead to do what I would describe, with apologies to Mookey and the trash can, as doing the right thing.