
Are larger neural networks stable?

Tuesday, February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore’s law holds – one interesting question is whether a neural network larger than ours would be stable.
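For what it’s worth, here’s the back-of-the-envelope arithmetic behind that “about 15 years” figure, as a quick Python sketch. The numbers are assumptions I’ve picked for illustration – roughly 10^9 neuron-equivalents simulable today and a two-year doubling period – not anything the hardware actually guarantees.

```python
import math

# Toy back-of-the-envelope for the "about 15 years" claim.
# All numbers below are assumptions for illustration, not measurements.
human_neurons = 1e11     # the ~10^11 figure used later in this post
current_scale = 1e9      # assumed neuron-equivalents we can simulate today
doubling_years = 2.0     # assumed Moore's-law doubling period

doublings_needed = math.log2(human_neurons / current_scale)
years_needed = doublings_needed * doubling_years
print(f"~{doublings_needed:.1f} doublings, ~{years_needed:.0f} years")
# With these assumptions: ~6.6 doublings, ~13 years -- the same ballpark
# as the "about 15 years" figure above.
```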

This is a subject that, if Google is to be believed, is of much scholarly interest. I’m not yet in a position to evaluate the validity of the discussions – I’m still working my way through a full understanding of neural coding – but I think it’s an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans all have roughly the same number of neurons, so in order to get more of one aspect of performance, you’re going to have to lose some other aspect.
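To make the fixed-budget idea concrete, here’s a deliberately crude sketch. The split between “capability” and “redundancy” is entirely made up – real brains obviously don’t partition neurons this way – it’s just meant to show that with a fixed count, getting more of one thing means having less of another.

```python
# Toy fixed-budget tradeoff: a set number of neurons split between
# "capability" and "redundancy" (standing in for stability). The split
# itself is a made-up illustration, not a model of real neural tissue.
TOTAL_NEURONS = int(1e11)

def budget(capability_fraction: float) -> tuple[int, int]:
    """Split the fixed neuron budget between capability and redundancy."""
    capability = int(TOTAL_NEURONS * capability_fraction)
    redundancy = TOTAL_NEURONS - capability
    return capability, redundancy

for frac in (0.5, 0.7, 0.9):
    cap, red = budget(frac)
    print(f"{frac:.0%} to capability: {cap:.2e} neurons, {red:.2e} left as spare")
```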

When we start building minds bigger than ours, the question that occurs is, will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you delve into intelligence, the more the system tends to oscillate or otherwise show signs of bad feedback coupling?

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we’re going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it so that it wants something from us in order to have that need met, what moral position does that leave us in?