
Archive for the ‘ANN’ Category

The dangers of certainty

Thursday, August 12th, 2021

So, in reading “Thinking fast and slow”, I’ve come to think of the human brain as having two modes. One of these modes involves some voodoo that we might call ‘free will’ – it doesn’t execute quickly, but it is easily changeable. The other involves hardcoded, compiled neural interconnects – it’s the reflexes that make you hit the brakes when the car in front of you stops – and, I am coming to suspect, the hard-wiring that makes you insist “Of course Jesus hates gays and would support hurting them in any way possible!” and other equally absurd interpretations of the bible – not to mention “COVID is a hoax and I am free to not wear a mask” even as you read of others who took that stance dying.

I talked in a previous article about the idea that, because multiple signals pass through the same set of subnets, our minds may protect even wrong ideas because they are necessary confluences of signal. I’ve also come to think more and more about the actual physical restrictions on changing the physical wiring – neurotransmitters, proteins, all sorts of actual, limited resources come into play when unlearning something. Therefore, there is a biological reason we might defend wrong ideas.

Now, there’s a couple of directions I’d like to go with this. At some future date I will discuss the tendency of certain Christians to think hate is love – I think I’ve talked about that before but the above does point out why there’s probably not a lot of point in trying to bring to their attention that they are just plain wrong – they’re not going to be capable of learning, their firm belief has translated into neural wiring and they *can’t* unlearn – even if Jesus himself came and told them they were wrong, they wouldn’t be able to accept and integrate that.

This same problem exists in political ideology that is carefully grounded in fiction. We’ve talked about how conservative media (especially Fox) has been lying for a long time – but the adherents to it think that the lies are facts, and have formed hard structures encoding them. Again, they can see over and over the data proving that trickle down economics do not work, and continue to push for it. They can see over and over that automation is taking their jobs, and continue to blame the immigrants.

Part of what I’m trying to wrap my head around is that there’s no point in being angry with them. Both groups of people mentioned above are contributing to making the world a worse place, but there’s no way they can stop. They can’t even be aware of the fact that they’ve got deep, counter-factual structures stored.

Now, there’s a lot of things that I talk about as being ‘unknowable’ – things like our purpose here, what happens after we die, what deities there might be (clearly if there is someone in charge they don’t want us to know that as the amount of work they’ve gone to to maintain plausible deniability is absurd). And I try to avoid having certain beliefs about those unknowables, because I’d rather not know than have absolute faith in something that’s wrong, especially if that absolute faith led me to encourage abuse of others because I thought, in my limited view of the universe, that their choices were “sin”.

I have noticed that over and over people create God in their image – limited and full of hate. One of the things that I’ve mentioned to various Christians trying to convince me that I’m going to hell is that I tend to think I’d be better at imagining God than they would because of my life experience – I’ve built worlds (in games), I’ve coded somewhere near a million lines in a wide variety of languages, I’ve used evolutionary algorithms, I’ve read thousands of books and studied many subjects. Now, I’m not claiming I’m God – far from it – but I think I’d be better able to wrap my head around what a deity might think like than most of the people who claim to know the mind of God because of a bunch of words written by people wandering around in a desert 2000 years ago.

Now, if God would like to change my mind about this, I’m certain *e knows how to reach me. I’m open to other ideas – but you are not going to convince me that the Bible is the word of God (except in the very general sense that if God is infinity, all books are the words of God). You will convince me that the words of Jesus contain wisdom – and the primary message is “Be excellent to each other”. Them who would like to hate on those who sleep with different folk are failing to be excellent to each other, therefore I am clear on the fact they have failed to grok the message of Jesus. Often it’s because they are creating God in their own, hate filled, confused, lost image. But you’ll never convince them of that. Why? See above.

Mania, islanding, the Shannon limit, and stepped psych med dosing

Sunday, June 20th, 2021

This is going to be an article about one way mental illness can occur, with some side digressions into how we do not do a very good job of treating it.

So, those of us who don’t believe there’s some sort of voodoo going on in the human brain understand it to be a very, very large neural network. It has 10^11 neurons, broken up into probably somewhere around 10^8 subnets, and those neurons have both excitatory and inhibitory inputs and are also affected by the chemical soup they live in, in a number of ways – including that there is a limit to how many times a neuron can fire before it has to uptake the chemicals that permit it to fire, because firing uses up resources; that a bunch of neurons firing near each other are all working out of the same resource pool; and that the presence of various other neurotransmitters (and even some more exotic things like moving electromagnetic fields) can affect firing probability.
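As a cartoon of that resource limit (the constants are invented, nothing here is biophysically calibrated), a neuron that can only fire while a shared pool has charge left might look like:

```python
# Toy neuron whose firing depletes a shared resource pool (illustrative only).
class ToyNeuron:
    def __init__(self, pool, threshold=1.0, cost=1.0):
        self.pool = pool            # shared dict: {'resource': float}
        self.threshold = threshold
        self.cost = cost            # resource consumed per spike

    def step(self, input_current, recovery=0.1):
        self.pool['resource'] += recovery        # slow chemical reuptake
        if input_current >= self.threshold and self.pool['resource'] >= self.cost:
            self.pool['resource'] -= self.cost   # firing uses up resources
            return True                          # spike
        return False                             # depleted or subthreshold: no spike

pool = {'resource': 2.0}
n = ToyNeuron(pool)
spikes = [n.step(1.5, recovery=0.0) for _ in range(4)]
# with no recovery, only the first two inputs can fire before the pool runs dry:
# spikes == [True, True, False, False]
```

Several neurons sharing the same `pool` dict would give you the neighboring-neurons-compete-for-resources effect as well.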

It is also possible there is additional voodoo going on – I’ve seen arguments that the brain is using relativistic effects, that it is using quantum effects similar to a quantum computer, that it is a lies-to-children simplified version of the actual system brought into Earth to help us understand, that it is actually a large radio receiver for a complex four-dimensional (or more) wave, and other less probable explanations. We can discuss things like how this relates to the soul in another article – this one is based on the idea that yes, it’s real hardware, and yes, it follows real physical laws.

One thing commonly commented about people who are experiencing mania is that they appear “fast”, sped up, and indeed you can observe in some percentage of manic folks an increase in the frequency and amplitude of some of the various “clocks” the brain uses to help synchronize operations (i.e. alpha and beta waves, which themselves are somewhat mysterious insofar as an EEG is only picking up a gross average of millions of neurons, and even that is not likely to be too accurate given that the electrical signals have passed through the blood-brain barrier, bone, etc.).

Anyway, it seems completely reasonable to think that during periods of mania, signalling is occurring faster. One clear law of nature we’re aware of is referred to as the Shannon limit, and it’s the idea that for any given bandwidth and signal to noise ratio there is a maximum signalling rate that can be successful. Attempts to exceed the Shannon limit (by signalling too fast) result in a breakdown of communication – the exact failure mode depends on the encoding method being used and some other variables.
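The limit itself is simple to state: for a channel of bandwidth B and signal-to-noise ratio S/N, the maximum error-free rate is C = B·log2(1 + S/N). A quick illustration (the numbers are just a textbook example, nothing neural about them):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel at 30 dB SNR (a linear ratio of 1000) tops out near 30 kbit/s.
# Try to signal faster than this and errors become unavoidable, no matter
# how clever the encoding.
c = shannon_capacity(3000, 1000)
```

Note that capacity falls as noise rises – which is suggestive if you imagine the brain’s chemical soup getting noisier during mania.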

I am fairly clear that some of the undesirable behaviors and effects of mania are the result of some of the signal pathways involved in connecting the various subnets that make up a person’s decision trees experiencing signalling that exceeds the Shannon limit, thus resulting in islanding. Side effects here can include loss of generation of memory (and apparent ‘jumps’ in time from the manic person’s POV), extremely poor decision making akin to having inhibitions suppressed by alcohol, and all sorts of interesting delusions. I think all of this is what happens when some of the longer inhibitory circuits stop carrying data, or meaningful data, because they are signalling beyond their Shannon limit, and thus the signal arrives at the other end either hopelessly smeared or of inadequate amplitude to cause the neuron in question to receive the excitatory or inhibitory input.

In my case one clear case of islanding that has been repeatedly observed is the presence of multiple personalities. This is not that I have DID but rather that this is what happens when islanding occurs in a neural network – you can think of a natural neural network as somewhat holographic and indeed a number of experiments (too many to document here, but I can write a separate article about this topic if there’s interest) bear this out.

(I should also clarify for those of you who aren’t familiar with operating an electrical grid – “islanding” occurs when individual parts of the system are out of touch with each other – in the case of the AC grid this would be because they’re physically disconnected or too far out of phase with each other to allow a connection to be made – neural networks can display similar behaviors, and it’s possible to experiment with this with ANNs simply by programmatically disconnecting bits of them. We’ve had chances to explore a lot of the different ways islanding can behave in a natural neural network because of stroke, head injury, various experiments such as cutting the corpus callosum, and the like.)
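In an ANN, that programmatic disconnection is literally one line of array surgery. A minimal sketch (a toy random network with invented sizes, not a model of any real circuit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 4-neuron subnets joined by cross-connections in one 8x8 weight matrix.
W = rng.normal(size=(8, 8))
x = rng.normal(size=8)

def step(W, x):
    return np.tanh(W @ x)   # one forward pass

connected = step(W, x)

# "Island" the network by severing the cross-subnet links.
W_islanded = W.copy()
W_islanded[:4, 4:] = 0.0    # subnet A no longer hears subnet B
W_islanded[4:, :4] = 0.0    # and vice versa
islanded = step(W_islanded, x)

# Each island now computes from purely local inputs, so the outputs diverge.
diverged = not np.allclose(connected, islanded)
```

The same trick generalizes: zero (or attenuate) any block of the weight matrix and you can watch which behaviors survive in each island.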

It is possible that this state is even an evolutionary advantage, as having something which causes some members of the tribe to take risks they would not ordinarily take may be how we got to, for example, understanding that lobsters and crabs are edible. There are certainly advantages to taking intelligent risks.

Of course, one problem we have with this is that often people in this state will commit crimes and while they are clearly not guilty by reason of insanity, our legal system loves to punish folks and is ever eager to make more money for the people running private prisons by putting them in jail. (It’s also extremely profitable for the lawyers). I suspect the majority of nonviolent criminals are just unable to manage the imperfect nervous system evolution has given us – survival of the fittest turns out not to be the best fitness function for creating creatures that are well suited to today’s world – and also a number of them are probably victims of abuse from predecessors that also suffered from similar problems.

In the meantime, the solution that I have found – using doses of a psych med stepped according to how fast the system is trying to run, in order to prevent revving past the Shannon limit – seems to be frowned upon by western medicine. They prefer the ‘I have a hammer so every problem is a nail’ approach of using a steady state dose no matter where in the cycle the individual being dosed is. The net result of this tends to be that the best medications for depression are hugely inappropriate when not in a depressed state and the best medications for mania are hugely inappropriate when not in a manic state – therefore the patient ends up overmedicated and often decides to go off the medication because of the damage to their quality of life the medication is causing.

On the other paw, using a stepped dose – this is far easier when the cycle is predictable, as mine is, but can probably be done by measuring various metrics if the cycle is unpredictable – I don’t know, I haven’t had an opportunity to test it – leads to very good results. There is no overmedication during periods that are not either manic or depressive peaks, and in the case of medication that suppresses mania you avoid amplifying depression – and the drug does not lose control authority, because it is not being overused.

(In this article, when I speak of a stepped dose, I mean a dose scaled to the need, that steps up as the system tries to run faster and down as it returns to normal. One advantage I have, which may or may not hold for other people, is that I can tell how fast I’m running by how long it takes to get to sleep, and can step the dose up until I’m able to get to sleep within an hour of initiating sleep.)
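The stepping logic itself is trivial to express. To be loud about the caveats: every number below is invented for illustration, the assumption that each step shaves about thirty minutes off sleep latency is made up, and none of this is medical advice – it’s just the shape of the algorithm:

```python
def stepped_dose(sleep_latency_minutes, base_dose=0.0, step=25.0):
    """Scale dose to how fast the system is running, using sleep latency
    as the proxy signal. Illustrative only: the thresholds and the
    30-minutes-per-step response are invented, and this is not medical advice."""
    dose = base_dose
    latency = sleep_latency_minutes
    # step the dose up until sleep would come within an hour
    while latency > 60:
        dose += step
        latency -= 30   # assumed response per step (a made-up number)
    return dose

# a 2.5-hour sleep latency calls for three steps above baseline;
# a 45-minute latency calls for none
d = stepped_dose(150)
```

The point of the structure is that the dose returns to baseline on its own as the proxy signal normalizes, instead of sitting at a steady state chosen for the worst part of the cycle.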

I should also mention that even with a stepped dose it is very helpful to have some complex activity to engage in during manic periods in order to keep a load on the engine, as it were. I suspect it helps a lot to have activities that follow hard laws (programming, electronics, etc) in order to avoid drifting too far into mystical/magical/delusional thinking, which is another risk involved with mania.

The broken windows theory of police abuse

Friday, June 5th, 2020

So, I’ve been thinking a lot about police abuse of power lately, for reasons that are probably obvious to anyone living on Earth in 2020. (For those of you *not* living on Earth in 2020, a police officer strangled to death a citizen who had committed a trivial offense. The citizen was of the skin color that is systematically abused on earth and the officer was of the skin color that is traditionally associated with privilege and power. There have been widespread uprisings against both the skin-color aspect of the crime and against the police state in general and the idea most police seem to have that they are above the law.)

One of the things I’m thinking about is how we need to send the message to police much more often, and in much stronger ways, that they are not above the law. I think an AI needs to ride along in every police cruiser, and every time a cop uses his lights to skip a light, or changes lanes without signalling, or otherwise ignores the law because he or she thinks they’re above it, they should accumulate some form of fine or logged history of abuse. Too much abuse, and they should be fired.
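The bookkeeping side of such an AI is the easy part. A sketch (the class name, thresholds, and the `justified` flag are all hypothetical – the genuinely hard part, detecting infractions from sensor data, is waved away here):

```python
# Hypothetical ride-along logger: tally infractions, flag past a threshold.
class RideAlongMonitor:
    def __init__(self, firing_threshold=3):
        self.firing_threshold = firing_threshold
        self.log = []   # (officer_id, infraction) pairs

    def record(self, officer_id, infraction, justified=False):
        # A lights-on pursuit scaled to the crime would count as justified
        # and not be logged against the officer.
        if not justified:
            self.log.append((officer_id, infraction))
        return self.count(officer_id)

    def count(self, officer_id):
        return sum(1 for oid, _ in self.log if oid == officer_id)

    def should_fire(self, officer_id):
        return self.count(officer_id) >= self.firing_threshold

m = RideAlongMonitor()
m.record(7, 'ran red light')
m.record(7, 'no turn signal')
m.record(7, 'speeding', justified=True)   # in pursuit: not logged
fired = m.should_fire(7)                  # two logged infractions: not yet
```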

The problem is that neural networks learn entirely too easily, and often we don’t even know they’re learning. The cop learns at a subconscious level that they can break the rules they enforce on other people and nothing will stop them. Eventually they become so convinced that they’re above the law that they start murdering.

One of my thoughts about this whole matter is that power and responsibility must, as Heinlein pointed out, balance. When someone has power without responsibility they become progressively more abusive. This article documents how power causes brain damage. I’ve seen police posting on Facebook who obviously are deeply brain damaged – they think all citizens should kowtow to them even as officers commit murder, and that protests of murder should be met with progressively more abuse. And of course, that’s exactly what we’re seeing: whenever protesters and cops meet up, the cops are using tear gas and batons even when the protesters are doing nothing wrong. The police are angry that we dare challenge their authority, and part of why they are angry is that they have brain damage from being police for too long.

Anyway, I do think that we all see officers ignore the law driving around in their cruisers all the time. And I also think that doing so emboldens them to think they are above the rest of us and can do anything they want to the rest of us. I think we need to continue to make it clear to them that they are a part of us, but that they are not above us, and that the same laws apply to them as to us. While it’s understandable that they might break some laws when in pursuit of a criminal, they should scale that to match the crime. The fact that the cops *always* catch the criminals – even when all the criminals did was speed – suggests to me the cops are abusing their powers. I suspect most police would risk people’s lives in order to make sure they bust someone for the crime of running away from the police. We know that they feel they should shoot at people who run away if they are of certain skin colors. I know that the only time I’ve been physically abused by the police, it was for the crime of not stopping quickly enough – I did stop, but it took about a minute.

One of the things I don’t think the police have thought about is that there is a feedback loop here. I run from the police when I am in a manic state because I am afraid of them. Being afraid of them is reasonable because over and over I have seen that the police kill citizens. I know from the statistics that as a person with a mental illness, being killed by the police is statistically one of the most likely ways for me to die. The fear is reasonable. And yet, the fear angers them. As the police abuse more and more, more and more people will be afraid and all of this will continue to grow worse.

I have been repeatedly threatened by the police inappropriately. As such, my opinion is that if possible, we should fire all of the police and start over. I believe there is a culture of abuse in America’s police departments where there needs to be a culture of safety. I believe most citizens already know this. And I think one thing that shows this is how often police break laws in ways that threaten the public when they are driving around in their cruisers. One of the things I have seen repeatedly on my small low-traffic street is police driving at double or more the speed limit – not because there is any need, but just because they are “above the law”. I think an AI monitoring their behavior would be hugely helpful, and I do not think such an AI would be difficult to create. Unfortunately we have a very large corpus of police behaving badly to train it on.

There are two other large obvious problems. One is that Americans are trained via propaganda to think well of the police. Most Americans have never thought about the absurdity of charging someone with a crime for selling a loose cigarette in a train station, or stealing $100. Americans think it’s reasonable to do a year of prison for stealing $100 – life is cheaper in America than in any other third world country I know of. The people who make the laws are not actually thinking about making a utopia, they are thinking about how they can get reelected and keep their cushy jobs. I know as a programmer that it’s difficult to write code that works well under all circumstances, even after careful consideration and with all the best tools for writing and maintaining code humanity can invent. Laws are code for humans, and they do not run in a testbed, they are not debugged, and they are enforced at times by angry thugs who are also members of white power organizations. We need a better way of writing and testing laws, we need a good way to delete laws, and we badly need for the police force to be on the side of the criminal rather than on the side of the politician until we reach a point where our laws are balanced and sane. Elsewhere I have made other criticisms of our criminal justice system, and I think it needs to be reformed top to bottom with a ‘throw it all out’ mentality that only saves the very best bits – and we should ask for the help of other, more successful countries when we do this.

Anyway, the point is, Americans are by and large already inclined to side with the cops and seldom realize how unreasonable most of American law is. And it only requires one holdout on a jury to avoid a cop being convicted of a crime. In the meantime, the supreme court has basically said, “Cops can murder if they want to. They have qualified immunity” – and you can safely bet every police officer is told about this early on in their career. And the police union will hire the best emotional-button-pushing lawyer to get them off. My theory is that when a cop is tried, the jury should be entirely made up of criminals. Yes, it’s a double standard. But police should be *better* citizens, at least when it comes to following the laws, than the rest of us. And they aren’t. They don’t follow the laws at all. We know they plant evidence, because we’ve seen them do this on cop cams. We know they murder. We also can guess that the type of person who *wants* to be a cop, who likes the job, is probably deeply flawed in a lot of ways.

Now, I can cite counterexamples. I do not think any of this is true of every cop, and I think only a very small percentage of cops are willing to actually murder. But the percentage is getting larger, and the powers that be are encouraging further abuse, and I do seriously think every cop who speeds just because he can should face the same fines the citizens he stops do. (It might be amusing to tie enforcement of the AI-detected fines to the cop’s own enforcement actions: if you don’t enforce speed laws, you can speed, but the minute you bust a citizen for 5 over, you accrue fines for every time you drove 5 over when you didn’t need to. It might lead to an *awesome* outcome – police refusing to enforce unreasonable laws.)

Anyway, I think there is universal agreement that we need to change things. I think the changes needed are much deeper than most people think, and I think that our memetics are awful – we have been taught fundamentally wrong lessons that make us willing to shoot at a burglar to avoid having our TV stolen, for example. And we have allowed our police to turn into a dog that worries the sheep.


Monday, January 7th, 2019

Only one musical post in all of 2018. Going to have to do better in 2019. I tracked ten different songs that I didn’t think were good enough to release in 2018, and I’ve tracked three so far in 2019. I’m not sure if I need to turn down the lint level, or if I’m just working towards another plateau. On the other paw, it’s not like I get emails clamoring for more of my music or anything 😉

One thing I’ve really been feeling is the sense of missing people. I miss Phoebe, I miss $PERSON, I don’t really ever seem to get over the people I’ve lost. I miss my uncle Joe.. I’ve even reached the point of missing my dad, who is still in my life. (I have set up a camping trip with him – I’m not so stupid as to not fix the ones that can be fixed.)

One of the things with Phoebe is remembering and regretting all the stupid things I said, especially during our break-up. I know that I participated in breaking that friendship too badly to be repaired and I wish that I had a time machine so I could do things somewhat differently.

Ah well, we go on. What other choice do we have?

I think part of what bothers me about missing $_PERSON at this point is that it’s been so long since I had any kind of contact that I have *no* idea who she is. At some point your copies of copies of memories have no real reliability to them at all, and generation loss has pretty much etched that one away to where it’s nothing but a guess. That combined with the sense that the things that pushed her away were not really me – I mean, they certainly weren’t who I would choose to be and they all occurred in extreme mental states.

Recently I spent some time talking to a Facebook friend who seemed to be experiencing an extreme mental state of her own. A number of my friends criticized me for this, or at least expressed doubt that this was a wise use of my time, but I am fairly sure that what I was doing fit nicely inside my philosophy of ‘be excellent to each other’, and that if more people behaved the way I do, the world would be a better place.

And I have to admit, as I research neural networks, my half-memories – often scarred, and interspersed with blackouts – of the periods where I wasn’t myself are telling. I’m fairly certain what I was experiencing was islanding – very large collections of subnets, large enough to be able to respond to stimuli but not large enough to sustain consciousness. This brings up the interesting question: in DID, are the alters conscious? I’ve always assumed that they are, but then I’ve been doing kitteny neocortex research that is making me question that assumption.

One of the things I’ve realized is that there’s no way we currently know of to tell whether a neural network is having a conscious experience or not. An NN will learn, and respond to stimuli based on what it’s learned, whether or not the ‘magic’ of consciousness is there. At this point I tend to agree with the person who theorized that consciousness is what information feels like when it’s been processed, but I think that’s only true in a very specific context which likely has to do with the way temporal memory works. However, in building my unsupervised learning system for the kittens, I found myself implementing something very similar to short term memory, because in order to do unsupervised learning in the model I’m currently using, you have to let LTP create the bindings first, *then* learn the lesson. You also have to keep track of previous lessons so you can unlearn them if they turned out to be wrong. (At least, you do to solve the particular problem I’m working on at the moment.)
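The learn-then-possibly-unlearn idea can be sketched in a few lines (a toy Hebbian update with an undo log – this is my reading of the general shape, not the actual kitten-research code):

```python
import numpy as np

# Hebbian-style learning with an undo log, so a lesson that turns out
# to be wrong can be reversed later.
class UndoableHebb:
    def __init__(self, n, lr=0.1):
        self.W = np.zeros((n, n))
        self.lr = lr
        self.history = []    # record of past lessons, newest last

    def learn(self, pre, post):
        # Hebbian rule: cells that fire together wire together
        delta = self.lr * np.outer(post, pre)
        self.W += delta
        self.history.append(delta)

    def unlearn_last(self):
        # reverse the most recent stored lesson
        if self.history:
            self.W -= self.history.pop()

h = UndoableHebb(3)
x = np.array([1.0, 0.0, 1.0])
h.learn(x, x)
h.unlearn_last()
restored = np.allclose(h.W, 0.0)   # weights are back to the pre-lesson state
```

Real synapses presumably can’t subtract a stored delta this cleanly – which loops back to the earlier point about the physical cost of unlearning.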

I haven’t really come up with any new years resolutions – I have a vague sense that I’d like to exercise more, vape less, eat less, write more music, and generally try not to break anything critical about my life.

From a facebook discussion : free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. “I” as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego, that this is an illusion. B: I am not sure, if I do exist, that I’m not deterministic. Experimenting with artificial neural networks, I note that they tend strongly towards the deterministic unless measures are taken to keep them from being deterministic. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet, the idea that I am a person wandering around taking actions of my own free will is very compelling. Especially when I start discussing the matter, which seems very meta.
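Point B is easy to demonstrate even with a crude stand-in for a network (toy code – a real ANN behaves the same way if you fix its weight initialization and inputs):

```python
import random

def tiny_net_output(seed):
    """A stand-in for an ANN forward pass: same seed -> same weights -> same output."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(4)]
    inputs = [0.5, -0.25, 1.0, 0.75]
    return sum(w * x for w, x in zip(weights, inputs))

# identical seeds give identical behavior: deterministic unless you inject noise
same = tiny_net_output(42) == tiny_net_output(42)
noisy = tiny_net_output(42) == tiny_net_output(43)   # different seed, different output
```

To get non-deterministic behavior you have to deliberately add an entropy source, which is exactly the point: determinism is the default.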


Tuesday, November 14th, 2017

So, one of the things I’ve been learning about is ANNs. I’ve tried playing with several different frameworks and several different topologies, and one of the ones I’ve been playing with is Darknet.

I’ve been trying to train a Darknet RNN on a corpus generated from all the text in my blog. So far the results have been less than stellar – I think I need a bigger neural network than I’ve been using, and I think in order to do that I need a bigger GPU because I’m running out of patience. I was astonished to discover >1 teraflop GPUs are now in my price range, so I’ve ordered one.

I’m hoping soon to have simSheer available as a php endpoint that people can play with. All of this is building up to using Darknet for some other purposes, such as image recognition.

It’s interesting to think that even if simSheer manages to sound like me, it will be doing so with no sense of aboutness at all – well, I *think* it will be doing so with no sense of aboutness. It has no senses, and no other data to tie my writings in with, so I don’t think that any of the neurons in it can possibly be tagged with any real world meaning. Or can they? This is probably a subject that some famous philosopher has held forth on and I should probably go try and find their works and read them, but in the meantime it’s certainly fun to think about.

I really wonder to what extent the aboutness problem (borrowed from Stephenson’s Anathem) applies to NNNs. Would the cluster I have for the concept of love even remotely resemble the clusters other people have? What would the differences say about me and them?

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn’t give power to certain conservatives who are in favor of criminalization of marijuana – and I think you all know I don’t smoke it but I’m an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head, and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – they would be able to have the same experience they’re having now, or wildly different experiences – a lot of experiences denied to all but a few would become open to everyone, such as the experience of being a rock star (simulated crowd unless you get *really* good at it and real people want to come see you, but I’d be into playing a simulated crowd, I’m not picky..)

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You’d have an interface for blocking people or locating new people you’d like to have in your life, for defining what you’d like your homes to look like and switching between them, and for adding possessions – look at the video game The Sims, and you get a good idea of a lot of the interface you’d need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you’re good at it. Otherwise, it’s simSubjects for you. But I’d probably include code to allow you to forget that fact if you wanted to, so you could *think* you were ruling the free world. I’d also include ‘conditional virginity’ (note that a lot of these are NOT my ideas, but the ideas of someone I talk to – $person’s future self, so to speak) so you could temporarily forget an experience you had, in order to have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty viruses loaded. (Well, we kind of have that situation now, don’t we? ;-)) However, the advantages are pretty staggering. Among other things, a separate, much smaller collection of neural code running under the hypervisor could do whatever body-care things needed to happen, including farming, feeding, etc. In the meantime, you could eat a ten course meal if you wanted to and never gain a pound.

In addition, you could either choose to learn things ‘the hard way’ for the joy of the journey, or ‘matrix-style’ – many times I think you’d want to learn them the hard way when they were related to creating art, because that is the only way it would be “yours” and not just the group skill in playing the guitar or whatever. And for some things, like athletic skills, the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and get it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Western Science

Thursday, January 5th, 2017

One of the problems I keep thinking about is that western science has one major flaw.

They don’t know what they’re measuring *with*. Until you know the answer to that question, you don’t know what you’re measuring. We don’t yet understand what we are – at least, if the hard problem of consciousness has been solved, no one has told me the good news. I’ve heard a lot of theories, but I haven’t heard one I’d call solid enough to call plausible yet.

In other words, dear scientists, please bump the priority on neuroscience and both ANN and NNN research. Dear warmongers, please stop wasting money blowing shit up until we can solve this more important problem. Kthx, Sheer.

Fun discussion about ANNs on facebook

Saturday, December 10th, 2016

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate 15 years out on Darpa Synapse and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying that a machine is more intelligent than humans. Such a system is impossible to test, and coupled with the ability to lie, it’s a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure in entrainment. This enables us to do things like giving our intelligence a powerful desire for – say – human artwork. This might not be the most moral thing ever, but it is something we could do. This gives us something to trade with it – offering us the possibility of befriending it.

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than human – well, I’m of the opinion that if you have something with more neurons than human, and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré Conjecture, and that was simply another human. It took 5 years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once it’s been come up with, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a Pandora’s box. But my experience with humans suggests we *like* opening Pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when you don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNNs naturally evolve towards whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning about how to best utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity would be maladapted if it placed another species above its own interests.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before, unless you want to think about things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concerns are more that this would turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is when a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m more thinking with a larger neural surface area, it might be able to see patterns in world economies and governments, suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presumes human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN, and you control its attractors, you control what it’s trying to achieve. You define success – you decide when neurons that fire together get to wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NNN know when it’s succeeding. Over time it gets more complex.

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child turns into an adult and starts controlling their own definition of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)
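
[Editor’s aside: the attractor idea in this exchange can be sketched in a few lines of code. This is a toy, rate-based network with reward-gated Hebbian updates – the sizes, rates, and patterns are all illustrative assumptions, not any real SyNAPSE design – in which an external ‘attractor’ (reward) signal decides when co-firing neurons get to wire together:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate-based network: 2 inputs -> 4 units.  Hebbian weight updates
# are gated by a scalar "attractor" (reward) signal, so learning only
# happens when whoever controls the attractor signals success.
W = rng.normal(scale=0.1, size=(4, 2))

def step(x, reward, lr=0.05):
    global W
    y = np.tanh(W @ x)                 # post-synaptic activity
    W += lr * reward * np.outer(y, x)  # fire together + reward -> wire together
    W *= 0.999                         # mild decay keeps weights bounded
    return y

pattern_a = np.array([1.0, 0.0])  # responses to A are rewarded
pattern_b = np.array([0.0, 1.0])  # responses to B are not
for _ in range(200):
    step(pattern_a, reward=1.0)
    step(pattern_b, reward=0.0)

# After entrainment, the rewarded pattern drives the network far harder.
resp_a = np.abs(np.tanh(W @ pattern_a)).sum()
resp_b = np.abs(np.tanh(W @ pattern_b)).sum()
```

Wiring the attractors “into one of the outputs of the network”, as suggested above, would just mean computing `reward` from `y` itself instead of supplying it externally.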

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just like a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have many voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. They might not be beyond their ‘childhood’ when we control their attractors. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy – what Heinlein suggested in Friday – or it may lead to it wanting to get along with us

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that in fact is Synapse’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along

Are larger neural networks stable?

Tuesday, February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore’s law holds – one interesting question is whether a larger neural network than us would be stable.
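
That 15-year figure is back-of-envelope, and here’s the envelope – assuming one Moore’s-law doubling every two years, and taking a ~1e9-neuron starting capacity as a purely illustrative guess, not a published neuromorphic-hardware number:

```python
# Back-of-envelope version of the extrapolation above, assuming one
# Moore's-law doubling every two years.  The ~1e9-neuron 2016 starting
# point is an illustrative assumption, not a measured figure.
human_neurons = 8.6e10   # commonly cited estimate for the human brain
neurons = 1e9            # assumed neuromorphic capacity at time of writing
years = 0.0

while neurons < human_neurons:
    neurons *= 2
    years += 2.0

print(years)  # 14.0 – in the neighborhood of the 15-year guess
```

Change the starting capacity by an order of magnitude and the answer only moves by six or seven years – that’s the nice thing about exponentials.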

This is a subject that, if Google is to be believed, is of much scholarly interest. I’m still not at a place to evaluate the validity of the discussions – I’m still working my way through a full understanding of neural coding – but I think it’s an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans in general all have the same number of neurons, so in order to get more of one aspect of performance, you’re going to have to lose some other aspect.

When we start building minds bigger than ours, the question that occurs is, will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you delve into intelligence, the more the system tends to oscillate or otherwise show bad signs of feedback coupling?

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we’re going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it just so that it will want what we have to offer, what moral position does that leave us in?