Archive for November, 2017

From a Facebook discussion: free will

Thursday, November 23rd, 2017

Well, the problem I have with saying I have free will is multifold. A: I am not sure I exist. “I” as a single entity might well be an illusion, since I appear to be a cooperating collection of subnets, and experiments like cutting the corpus callosum argue strongly that I am not a single ego, that this is an illusion. B: I am not sure, if I do exist, that I’m not deterministic. Experimenting with artificial neural networks, I note that they tend strongly toward the deterministic unless measures are taken to keep them from being so. C: I am not sure, if I do exist and am not deterministic, that it is free agency and not an RNG or random noise that is guiding my actions. And yet the idea that I am a person wandering around taking actions of my own free will is very compelling. Especially when I start discussing the matter, which seems very meta.
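Point B can be illustrated with a toy sketch. The weights and the function name below are mine, invented for illustration (not from any real framework): a network with fixed weights is a pure function, so it maps the same input to the same output every time, and it only stops being deterministic once a noise source is deliberately injected.

```python
import random

def tiny_net(x, noise=0.0):
    # A fixed two-"neuron" layer with ReLU activations, then a weighted sum.
    # All weights are arbitrary constants chosen for illustration.
    h = [max(0.0, 0.5 * x + 0.1), max(0.0, -0.3 * x + 0.2)]
    out = 0.7 * h[0] - 0.4 * h[1]
    # Determinism only breaks if we deliberately add a random term.
    return out + random.gauss(0.0, noise)

# No injected noise: a pure function, same input always gives same output.
print(tiny_net(1.0) == tiny_net(1.0))  # True

# With a noise source, two calls on the same input almost surely differ.
print(tiny_net(1.0, noise=0.1) == tiny_net(1.0, noise=0.1))
```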

Trippy Hippie Meditation Music

Wednesday, November 15th, 2017

More movie/atmospheric stuff



Tuesday, November 14th, 2017

So, one of the things I’ve been learning about is ANNs (artificial neural networks). I’ve tried several different frameworks and several different topologies, and one of the frameworks I’ve been playing with is Darknet.

I’ve been trying to train a Darknet RNN on a corpus generated from all the text in my blog. So far the results have been less than stellar – I think I need a bigger neural network than I’ve been using, and I think in order to do that I need a bigger GPU because I’m running out of patience. I was astonished to discover >1 teraflop GPUs are now in my price range, so I’ve ordered one.
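Generating that corpus amounts to concatenating every post into one plain-text file for the char-RNN to read. A minimal sketch of that step (the function name, the tag-stripping regex, and the post delimiter are all my assumptions, not anything from Darknet itself):

```python
import re

def posts_to_corpus(posts):
    """Concatenate blog posts into one plain-text training file for a char-RNN.
    Strips HTML tags and collapses whitespace; the blank-line delimiter between
    posts is an arbitrary choice."""
    cleaned = []
    for post in posts:
        text = re.sub(r"<[^>]+>", " ", post)      # drop HTML tags
        text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
        if text:
            cleaned.append(text)
    return "\n\n".join(cleaned) + "\n"

posts = ["<p>Well, the problem I have with free will...</p>",
         "<p>More movie/atmospheric stuff</p>"]
corpus = posts_to_corpus(posts)
```

Darknet’s char-RNN trainer then consumes the resulting file (something along the lines of `./darknet rnn train cfg/rnn.cfg -file corpus.txt` — the exact command shape is an assumption on my part, check the Darknet docs).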

I’m hoping soon to have simSheer available as a PHP endpoint that people can play with. All of this is building up to using Darknet for some other purposes, such as image recognition.

It’s interesting to think that even if simSheer manages to sound like me, it will be doing so with no sense of aboutness at all – well, I *think* it will be doing so with no sense of aboutness. It has no senses, and no other data to tie my writings in with, so I don’t think that any of the neurons in it can possibly be tagged with any real-world meaning. Or can they? This is probably a subject that some famous philosopher has held forth on, and I should probably go try to find their works and read them, but in the meantime it’s certainly fun to think about.

I really wonder to what extent the aboutness problem (borrowed from Stephenson’s Anathem) applies to ANNs. Would the cluster I have for the concept of love even remotely resemble the clusters other people have? What would the differences say about me and them?