Teachability and the Milgram experiment

TL;DR: The Milgram effect may arise from the fact that most subnets in a natural neural network (NNN) can't tell the original source of authority-tagged information.

Warning: I haven’t organized my thoughts around any of this at all, and I have an affection-starved cat interrupting me for more pets every few minutes, so this is likely to be one of my less coherent posts.


So, I just finished watching a movie about the Milgram experiments. The first thing that occurred to me is that the subjects’ reactions make it very clear that they were not in unified agreement about continuing to push the button – in fact, all sorts of subnets were asserting that they should stop. It does occur to me that, in general, natural neural networks must have some willingness to trust authority (at least properly authenticated internal authority) or the result would be utter chaos. In addition, at times it’s a good idea to trust external authority, at least insofar as avoiding the lion the sign is warning you about. However, you clearly shouldn’t trust *anyone* who claims to be an authority, or you’ll end up supporting the Trumps and Hitlers of the world as they do truly abysmal things – it is clear that people are willing to abuse our susceptibility to instructions from authority to have us do all sorts of things that shouldn’t be done.


On the other hand, neural networks need to be willing to accept data from outside if we are ever to go beyond what one person can discover in a lifetime – the susceptibility to authority is likely part of the same process that makes us able to learn from the mistakes of others. So how does one retain that functionality while still telling the government “Hell, no, I won’t go” when they’re asking you to bomb Vietnam over some insane ideological war about resource allocation? I’m not exactly sure.


I do have a hunch that being aware of the Milgram experiments makes one less likely to be susceptible to that sort of influence. So it is possible to build an informational immune system of a sort. We likely also end up building informational immune systems that protect us from our own worst ideas – well, those of us who don’t end up being Jeffrey Dahmer.


Now, this gets into a common digression for me. It’s obvious to me that I have a fundamentally different view of what ‘good’ is than many people. In some cases, I can get inside their heads even though I don’t agree with them, and in other cases, I feel much like there are aliens roaming among us. Like, I can understand the right-wing fear that we can’t afford to feed and house and clothe everyone, or that if we did so we would damage their self-reliance and the further evolution of our species, and even the mindset that it’s not fair that someone would be allowed to stay home and smoke weed (or whatever). I don’t agree with any of these views, but I can understand their genesis. However, at some point along the ideological spectrum, I stop being able to even track why someone would feel that their definition of good was good. I can’t get inside the mind of the person who thinks we should stone gay people, or the guy advocating for legalizing rape (yes, there really is one). In general, I can’t get into the heads of the well-poisoners who have to drink from the same well.


This is a real phenomenon. I see it over and over. Now, in general, I think people should stop well-poisoning even when it doesn’t affect them, and I think it’s awful that people do it – more on this later, especially on the subject of sex and well-poisoning – but the ones who I really cannot understand are the ones who want to poison the well they drink from. If you are advocating violence against minorities, that’s what you’re doing, because sooner or later, you’re going to be that minority. If you are advocating violence in general, that goes double. Every time I see riots over police shootings that are not carefully and well targeted against the police, but rather against the communities who were already hurt by the police shooting, I wonder – and I’m sorry, but it’s the truth – what is wrong with these people?


Now I have, over and over, seen that anger leads to bad and irrational decisions. In general, the people I know who get angry when they have computer problems can never, ever solve them – and sooner or later they lose me as a resource, because I don’t like to be around irrationally angry people. I assume that the rioters are suffering from irrational anger, but I can’t help but wonder, to bring this back to its original topic: are they also suffering from a bit of the Milgram effect? Do emotions like anger and fear make us more susceptible to being Milgramed? Or does a much wider range of emotions make us more susceptible?


Back to the subject of NNNs, I am really wondering: can most subnets in our minds even tell the difference between an inside signal and an outside signal? How equipped are they to evaluate the validity of an order and the source of said order? I also wonder, for all the people who clearly wanted to stop increasing the voltage but did not, how difficult was the inner struggle between the parts of them that wanted to do the innately right thing and the parts that wanted to do what they had been externally programmed to believe was the right thing? There’s no doubt that we’re externally programmed to respond to authority with obedience – in America, it’s a pretty common theme that if you don’t, the cop whips out his gun and shoots you, and is told, at least privately, “good job, officer.” There are all sorts of authorities wielding power over us, with threats ranging from bad grades to unemployment and starvation and having nowhere to live to being physically abused – and we do live in a system that has pretty well built a way of programming us to be obedient. And yet, I think there are parts of us that refuse to participate in the horror show we’re asked to engage in – soldiers often come back from blowing up other people at government command with severe psychological damage, for example, which suggests that the minds of many of us are not really geared for being awful. And clearly, most of the people participating in the Milgram experiment resisted to one degree or another – very few joyfully and willingly cranked the voltage up to 450. They just didn’t resist *enough*.
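For what it’s worth, the hunch in the TL;DR can be caricatured in a few lines of code. This is purely my own toy analogy – the `Message`, `relay`, and `subnet_decision` names are invented, and real neural tissue obviously works nothing like this – but it illustrates why a tag alone can’t establish origin: once a message has been relayed even once, an authority tag carries no provenance, and a subnet that obeys the tag can’t distinguish an internal command from an external one.

```python
# Toy sketch of the "subnets can't tell the source" hunch.
# All names here are invented for illustration.

from dataclasses import dataclass, replace as dc_replace


@dataclass
class Message:
    content: str
    authority_tag: bool  # "this came from someone in charge"
    origin: str          # ground truth -- NOT visible to the subnet


def relay(msg: Message) -> Message:
    # A relay passes the tag along verbatim; provenance is lost.
    return dc_replace(msg, origin="unknown")


def subnet_decision(msg: Message) -> str:
    # The subnet can only inspect the tag, not the true origin.
    return "obey" if msg.authority_tag else "evaluate"


internal = Message("stop pushing the button", authority_tag=True, origin="internal")
external = Message("keep pushing the button", authority_tag=True, origin="external")

# After one relay hop, both messages look identical to the subnet:
print(subnet_decision(relay(internal)))  # obey
print(subnet_decision(relay(external)))  # obey
```

The point of the caricature: without something like authentication of the sender, obeying the tag and obeying the true source are indistinguishable policies from inside the subnet.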


Now, I keep advocating that psychology needs to throw away the storytelling and study what’s happening on the iron – and part of this is that psychology is often obsessed with the idea that we are single, coherent individuals, when science suggests that while we have the *experience* of being single, coherent individuals, we’re actually many, many collections of subnets. For those of you who haven’t read about them, the experiments with cutting the corpus callosum strongly suggest we’re the aggregate result of many, many subnets. At least on this track and in this world – I have had experiences which I can’t easily explain, but which suggest that we’re not always at the whims of our hardware in quite the same way.
