Archive for the ‘The Big Picture’ Category

Software modeling of economic systems

Sunday, May 27th, 2018

So, while there’s much shouting back and forth and rending of garments on the subject of whether collectivism is good or bad, whether the time has come for socialism, and how much damage being a member of the 1% does, I’m curious – we all have strong opinions, and obviously we’ve all got reasons for them, but has anyone done any software modeling on this?

It should be possible, by looking at recorded data for the couple hundred countries and thousands of industries we have records for, to build software models of various types of collectivism and capitalism and any other systems on record, and determine what the best answer is. While right-wingers may feel firmly convinced that collectivist attempts are doomed, and certain elements of the left may feel equally convinced that socialism will cure all our woes, I’m not really convinced that anyone who has never modeled the problem has any idea at all what will and won’t work.

Clearly capitalism comes with some advantages re: competition, but just as clearly, as we move into the age of automation, we’re going to have to do UBI or *something*, or we’ll have no jobs left and people will starve to death because they’re more expensive than machines. I guess one question we should probably start with is: can we agree on why we’re here? Can we at least agree it’s not to starve to death?

If we can, could we perhaps model some of these things? Maybe try to determine how much collectivism hurts initiative and innovation, and figure out whether we could successfully run a collectivist system at all when measuring in real resource costs rather than in stupid-fiat-dollars?

I grant you that modeling this problem would not be an insignificant challenge – after all, we’re not talking about the Glooper here – but I imagine we’d have a lot better luck with it than we do with modeling the weather, especially if we treat it as a problem in probabilistic behavior and derive the likely probabilities from pre-existing data.
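To make ‘probabilistic modeling’ a little more concrete, here’s a minimal sketch of the shape such a thing might take – a toy agent-based Monte Carlo in Java. Every number in it (the effort penalty, the output noise, the redistribution rates) is an invented placeholder, not anything calibrated against real data; a real model would fit those knobs to the recorded history mentioned above.

```java
import java.util.Arrays;
import java.util.Random;

// Toy Monte Carlo sketch: compare redistribution rates by the median
// welfare they produce in a simulated population. All parameters are
// hypothetical placeholders, not calibrated values.
public class PolicyModel {
    static final Random RNG = new Random(42);

    static double medianWelfare(double redistribution, int agents, int years) {
        double[] wealth = new double[agents];
        for (int t = 0; t < years; t++) {
            double pool = 0;
            for (int i = 0; i < agents; i++) {
                // Assumed (not established): effort falls as redistribution rises.
                double effort = 1.0 - 0.5 * redistribution;
                double output = effort * Math.max(0.0, 1.0 + 0.5 * RNG.nextGaussian());
                double tax = output * redistribution;
                wealth[i] += output - tax;
                pool += tax;
            }
            double dividend = pool / agents;   // taxed output shared back equally
            for (int i = 0; i < agents; i++) wealth[i] += dividend;
        }
        Arrays.sort(wealth);
        return wealth[agents / 2];
    }

    public static void main(String[] args) {
        for (double r = 0.0; r <= 1.0; r += 0.25)
            System.out.printf("redistribution=%.2f  median welfare=%.2f%n",
                    r, medianWelfare(r, 10_000, 50));
    }
}
```

The numbers a toy like this prints mean nothing by themselves. The point is that once the knobs come from recorded data instead of someone’s imagination, the argument becomes one about measurable inputs rather than shouting.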

Then again, it may be that some members of both sides are so firmly programmed that they couldn’t accept the output of a software model of collectivism even if it were run. I do often wonder: if we weren’t all being programmed by the left and right media and by our peer groups and bubbles about what exactly we’re supposed to think, what *would* we think? There have got to be *some* people who recognize the sheer folly of the idea that the other side must be completely and totally wrong. Yes, I’ve chosen my side, but I like to think the people on the other side are not idiots, and I also like to think that one of these days we will stop shouting at each other and start devising some scientific methods to ascertain what the truth really is.

Of course, the problem with this is that there’s a chunk of people out there convinced that some God – one who has made no attempt in recent memory to get in touch with us, and who is quite possibly a work of fiction invented by various unsavory elements of our past culture in an attempt to achieve some form of social control – is actually completely in control, and that if science says something other than what they expect to hear from God, then science must be wrong. Have faith even in the face of hard data, or you’re a bad $RELIGIOUS_NUT.

Ah, the human condition. Full of so many interesting miseries and contradictions.

What I’d do if I could

Sunday, June 18th, 2017

WARNING: This gets into some serious blue-sky territory

So, recently, I mentioned that I wouldn’t give power to certain conservatives who are in favor of criminalizing marijuana – and I think you all know I don’t smoke it, but I’m an ally for those who do – and SS asked if I favored an America of exclusion.

Well, yes and no. I gave him a very short answer, which is that I favor a world where no one has any power over anyone else, but I thought I’d give the longer answer, which is how I’d implement it if I were king.

I would load a hypervisor in everyone’s head, and network everyone together. Their bodies would be decoupled from their conscious experience. All physical possessions would be neural software – people could have the same experience they’re having now, or wildly different ones. A lot of experiences denied to all but a few would become open to everyone, such as the experience of being a rock star (playing to a simulated crowd, unless you get *really* good and real people want to come see you – though I’d be fine playing to a simulated crowd; I’m not picky).

A lot of experiences, like being in massive amounts of pain as your body fails, would go away. You’d have an interface for blocking people or locating new people you’d like to have in your life, for defining what you’d like your homes to look like and switching between them, and for adding possessions – look at the video game The Sims and you get a good idea of a lot of the interface you’d need. And you could fly with the Blue Angels, or be a rock star, or go mountain climbing, or drive in NASCAR, or whatever.

Now, at this point, “you” are a virtualized entity running under a hypervisor. Guess what this means – we can move you from body to body! You’d very likely be immortal as long as our society holds together. I’m assuming if Heaven (or $RELIGIOUS_UTOPIA) exists, this is part of it. I sometimes think we’re already in it and we’ve lost the instruction manual.

Anyway, you could be a despot or a fascist leader if you want – but, similar to being a rock star, you probably only get to have subjects if you’re good at it. Otherwise, it’s simSubjects for you. But I’d probably include code to let you forget that fact if you wanted to, so you could *think* you were ruling the free world. I’d also include ‘conditional virginity’ – note that a lot of these are NOT my ideas, but the ideas of someone I talk to, $person’s future self, so to speak – so you could temporarily forget an experience you’d had, in order to have it for the first time again.

Now, there are some serious challenges. We’d have to really master security in information systems, or we’d end up with people with all kinds of nasty viruses loaded. (Well, we kind of have that situation now, don’t we? ;-)) However, the advantages are pretty staggering. Among other things, a separate, much smaller collection of neural code running under the hypervisor could handle whatever body-care things needed to happen, including farming, feeding, etc. In the meantime, you could eat a ten-course meal if you wanted to and never gain a pound.

In addition, you could choose to learn things either ‘the hard way’, for the joy of the journey, or ‘matrix-style’. Often I think you’d want to learn the hard way when the skill relates to creating art, because that’s the only way the result would be “yours” and not just the group’s skill at playing the guitar or whatever. And for some things, like athletic skills, the journey is part of the fun and not to be missed.

Anyway, learning how to write code for natural neural networks and getting it to run correctly is a big ask. But that’s where I’d go with my utopia, Steve.

Inevitable neurological war, part deux

Tuesday, January 31st, 2017

So, I discussed in an earlier article an inevitable neurological war that I see set up entirely too often. You can find that article here if you’d like to review the bidding.

I submit to my audience that Christianity as I see it implemented on Earth, at least amongst a number of its adherents, sets up a similar inevitable neurological war. Subnets have to decide whether they’re going to submit to the idea that God is Love, and Love keeps no record of past wrongs, or to the idea that God is Justice, and will torture you for all eternity for the mistakes you make here. Both messages are contained within the same religion – along with a very nice bit of code to make it both viral and not self-updating.

In other words, it’s malware. It sets up a neurological game of Go, very likely in order to make it easier for the Powers That Be to control us by limiting how much use we can make of our 10^11 neurons.

Now, I don’t deny that some people manage to transcend this feature of it. I don’t doubt they are the ones for whom the idea of God being Love is the important one, and them as have a broad and complex definition of Love. I wouldn’t deny that Love will occasionally deliver you a difficult lesson. I do continue to insist that the only way that Love would place you in hell for all eternity is if you A: asked for it and B: continued to ask for it, repeatedly, for all eternity, knowing that that is what you were asking for.

At this point, I’ve got my eyes out for neurological games of Go in general. I’ve come to suspect that the operating system loaded by entrainment into most humans has a very high suck factor, and that A: we can do better and B: we should do better.

So, one of the things I’m weeding out in my own mind is neurological games of Go that have no end and benefit no one.

As I’ve talked about, I’m pretty sure that you can experience amazing things – and quite desirable ones – if you get the *correct* neural operating system loaded on your mind.

How I handle people who love me more than I love them

Saturday, January 7th, 2017

I thought I’d talk about this, because it does happen. I haven’t yet experienced someone loving me who I don’t love at all, but I’ve had a couple of people who loved me – or wanted me – more than I wanted them.

I give them as much of my time as I can spare, and I tell them honestly that I don’t feel as strongly as they do. I avoid them only if they actively hurt me repeatedly – something that as far as I know has only come up once, and I think that the fault may have been mine. I think this is the winning answer.

Here’s why. If someone loves you, they are happier when they are around you. In addition, because of the same phenomena that make vibe work (at concerts and raves), you are slightly happier too. Therefore, it’s a net happiness win for the universe – and I choose to play on the side of happiness wins for the universe, because I feel like at least this corner of it has far too much fear and pain, and not nearly enough joy and love.

Overloads

Thursday, January 5th, 2017

I’ve probably already talked about this, but I think one of the reasons that discussions about politics and religion often end in arguments is that English is not a good language for talking about such things.

It has some basic flaws – the biggest one, by far, is the overloads. Not as big, but also frustrating, is that there’s no good way to express the relative certainty of a statement of truth without adding a lot of words.

The overloads thing is a serious problem. There are many, many neural symbols that map to the word ‘God’, for example, and many, many that map to the word ‘Love’. So the statement ‘God is Love’ can map out all sorts of ways in different people’s minds – what it actually means in neural symbols, ultimately the most real post-linguistic definition you can have, varies from mind to mind. And ultimately, as my friend Tory reminded me repeatedly, you can end up with semantic arguments – which waste a lot of energy and do not move the ball down the field.

For those of you who are not programmers, an overload is when one function call can execute more than one set of code. In programming languages, overloads are type-constrained – that is, you can only have one overload for String Foo(String Bar); you could have a String Foo(Int Bar), but not a second String Foo(String Bar). English has no such constraints, nor does it have any easy way – short of a lot of discussion, such as I often have with $future-person[0], about *which* exact meaning of Love and God you intend – to nail down exactly what is meant by what. Linguistically, overloads are just asking for trouble.
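To make that concrete for the non-programmers, here’s a minimal runnable Java version of the Foo example above (the method bodies are invented purely for illustration):

```java
// Type-constrained overloading: the compiler picks which foo to run
// based on the argument's type, so the name can never be ambiguous
// the way an English word like 'Love' can.
public class Overloads {
    static String foo(String bar) {      // overload #1: takes a String
        return "string version: " + bar;
    }

    static String foo(int bar) {         // overload #2: same name, different type
        return "int version: " + (bar * 2);
    }

    // A second foo(String) here would be a compile-time error. English
    // imposes no such constraint - it happily accepts unlimited
    // "overloads" of God and Love and leaves the resolution to us.

    public static void main(String[] args) {
        System.out.println(foo("love")); // resolves to overload #1
        System.out.println(foo(42));     // resolves to overload #2
    }
}
```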

Politics, view horizons, and neural networks

Thursday, December 15th, 2016

So, one thing that has definitely come to light in recent days and weeks is that a lot of us are running around with fundamentally different views of reality at the moment. In some people’s worlds, Obama is a hero – in others, he’s a Muslim terrorist or worse. What gives?

Well, part of what gives is the idea of view horizons – some people like to talk about this as ‘bubbles’, and perhaps that’s a more reasonable word, but I’d like to explore the idea from a slightly different angle briefly.

So, in a natural neural network, each neuron can only see information that it’s either directly connected to, or connected to a relay source for. In the split-brain experiments involving cutting the corpus callosum, you can see this dramatically demonstrated: when a placard containing instructions is shown to one visual field of the subject, they follow the instructions on it, but when asked why they did so, they tell a story that’s completely unrelated to “Because you told me to”. The instruction on the placard is no longer on the view horizon – no longer routable via a reasonably short route – for the part of the subject’s mind that controls their voice.

Similarly, if you think of us as independent neurons in a very, very large neural network – with communications links like books, voice, and the internet taking the place of the dendrites coming off of neurons – we can only know about what is on our view horizon. Most of us don’t have direct access to Obama, so we can’t make up our minds based on personal interaction whether he’s a Muslim terrorist, a superhero, or somewhere in between. However, we’re all connected to either clusters of other neurons – our friends – or a broadcast bus – the news – which steers our view at least somewhat.
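If you want to play with the idea, here’s a tiny Java sketch – a made-up toy network, not a model of any real one – where a node’s ‘view horizon’ is simply the set of nodes reachable within a fixed number of relay hops:

```java
import java.util.*;

// Toy view-horizon demo: a node only "sees" what is reachable within
// maxHops relay steps. Anything past the horizon arrives (if at all)
// pre-filtered through intermediaries - the game of telephone.
public class ViewHorizon {
    static Set<String> horizon(String start, int maxHops,
                               Map<String, List<String>> links) {
        Set<String> seen = new HashSet<>(Set.of(start));
        List<String> frontier = List.of(start);
        for (int hop = 0; hop < maxHops; hop++) {
            List<String> next = new ArrayList<>();
            for (String node : frontier)
                for (String neighbor : links.getOrDefault(node, List.of()))
                    if (seen.add(neighbor)) next.add(neighbor);
            frontier = next;
        }
        return seen;
    }

    public static void main(String[] args) {
        // Hypothetical network: who talks directly to whom.
        Map<String, List<String>> links = Map.of(
                "you",     List.of("friendA", "friendB"),
                "friendA", List.of("you", "news"),
                "friendB", List.of("you", "friendA"),
                "news",    List.of("friendA", "source"),
                "source",  List.of("news"));

        // "source" is three hops from "you", so it never appears directly;
        // everything you learn about it came through "news" and a friend.
        System.out.println(horizon("you", 2, links));
    }
}
```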

Now, there’s a real possibility that both universes exist – we keep learning funny little things at the quantum level, and it’s possible that there is both a universe where Obama is a Muslim terrorist and one where he’s a superhero, and that our experience here on Earth at the moment is at the confluence of two worldlines. However, it’s far more likely that what we’ve got are two teams of people, each spinning the story in the direction they believe is true – and because of confirmation bias, they’re drifting slowly further and further from reality.

Now, I’ve got news for you – no matter which side you’re on, it’s not likely you have an accurate view. Your view horizon is a long way from the original source, and everything is being filtered through many, many minds in a game of telephone – and worse, those minds are influencing each other. But this opens up questions about what exactly happens inside our own minds. We tend to think of ourselves as a single individual – an ego, if you will – but there’s almost certainly a large fraction of our neurons that are ego-dissenting. These are what keep the inhibit inputs on our neurons lit up, what keep us from becoming narcissists or something worse, and what provide that all-important critical judgement we need when, for example, we want to create great works of art.

I am curious as to whether what we’re seeing in the political sphere is a similar thing on a macro level.

Neurological wealth

Thursday, November 24th, 2016

The most impressive – and disruptive – technology that we could possibly come up with would be neurological. If we could load software on our minds the way we do on computers, we could give the experience of unlimited wealth to all of us, for virtually no cost.

Now, there are some major problems with this. The security implications alone are terrifying – we already have enough problems with viral propagation of bad ideas via religion and just plain ol’ fashioned entrainment.

However, the win is equally huge. Let me give you a few examples.

First of all, whatever your ‘day job’ is, chances are it takes up a very small percentage of your total mental capacity. It would be possible for you to do whatever task helps keep this old ball spinning using background capacity, while never actually having the conscious experience of doing it.

Second of all, everything you experience in this world is made up of information. And there is no doubt that our 10^11 neurons are sufficient computing capacity to generate any experience you care to name out of whole cloth. Get them working in the right way and you can experience anything *anyone* can experience. The software to do this represents wealth of a very interesting kind. It can be copied indefinitely, without costing the creator anything. It can potentially add value to the experience of everyone who uses it. It would reduce our impact on the planet considerably – since we would no longer need physical ‘things’ for most of the adventures we might want to have.

Of course, there’s absolutely no proof that this hasn’t already happened, and that the controls of whatever network is responsible for rendering our experience of reality are just in the paws of someone who favors a less than utopic experience for everyone else. I think there are people who would enjoy the power that denying utopia to others represents.

Anyway, when I talk about giving everyone everything, I do think this is a reasonable approach to doing it. Yes, the hurdles are high – we haven’t even learned to build software that runs well on digital state machines, so the idea of writing software for our minds is a bit shiver-inducing. But the reward is even higher.

Given that everyone’s utopia is different, this is the only reasonable way I can see for us to give everyone a utopic experience at the same time.

Inevitable neurological war

Saturday, February 27th, 2016

This article is almost entirely conjecture. We sadly are not yet at a point where we can actually say exactly what is going on inside the human mind. Hopefully soon.

That said..

The way that we’re raised, and the society that we’re in, leads to an inevitable neurological war.

It’s built into us for physical touch to feel good. Depending on whether you’re wearing your evolution hat or your intelligent-design hat, this is either the inevitable result of us needing to get very close to each other to reproduce, or a design goal. (I have to say, building in things that feel good would certainly be a design goal if *I* were the designer.)

On the other hand, it’s memetically built up – as far as I can tell, for very stupid and destructive reasons – for us to think that it’s wrong to be in love with more than one person, and that it’s wrong to want to be involved in sexual contact below a certain age – in fact, I see some of my Facebook friends encouraging the idea that trying to frighten the lovers of your female child is “protecting” her and a desirable thing to do. (In fact, teaching her about consent would seem to be a much healthier type of protection, but I digress.)

Our mainstream religion – despite it never being clearly spelled out in the Bible in the negative (the Bible says that sexual love within a marriage is good, but does not actually state that sexual love outside a marriage is bad – that’s something we decided to tack on later) – teaches that if you ‘go too far’ before marriage, you’re a bad person – that sexual contact, despite feeling good, is a sin. It also teaches the idea that your lover is your property, and that if someone else wants to experience sexual contact with them, they are breaking one of the Ten Commandments – even *thinking* about it is a crime against God.

Now, we all know what I think of Christianity. But another question is: what do I think all this does to our minds? Well, by definition, it creates two sets of subnets that are always going to be in opposition. It’s wired in – on a deeper level than any religion will ever be able to reach – that touch feels good, that petting and loving is *right*. It’s something I personally find myself drawn to as an experience I want to have again and again. It’s what I want to dream about.

In the meantime, our parents try very hard to keep us from sexual contact – or even, in my case, nonsexual/cuddling contact that’s too prolonged. They program into us a subnet that says “this is sin, this is bad, this is wrong”. The idea that your virginity is something precious that you should give to your first and only lover also underlines this. This creates a subnet that says sex is bad, dirty, should be looked at with shame and guilt, isn’t something you should want, except in the situation of marriage – and probably not even then, if one reads the writings of the Victorians.

What happens when you have two subnets at war with each other? Well, first of all, you end up feeling the tension between them. Second of all, they eat capacity. Each one tries to claim a certain percentage of the neural Go board, and each tries to defeat the other.

So, I think some of this is jealousy: our parents get attached to us, and don’t want to lose us to our lovers. Some of it is an amplifying effect of stupidity across the generations – one generation made something up, and then lied about it being the word of God. (If it were really the word of God, God would still be around and saying it. Probably in person. Certainly in some way that left no doubt that we were hearing from a deity.) Some percentage of each successive generation after that was duped into believing they were hearing holy wisdom when in fact they were hearing damaging bull.

I don’t think that it’s immoral to love and be loved. Nor to express that love sexually if you’ve a mind to. I think that treating sex as shameful and wrong is the sign of a deeply broken set of memes. I think that people who think we should slut-shame are deeply confused about a whole lot of things, and are far more immoral than the sluts they would shame. And I think it is a sign of how broken our culture is that we call people immoral for participating in an act that generally feels good and improves the attitude and mental health of both participants, while the people who seek to hurt them for choosing to participate in something that feels good are given radio shows.

I also think that in general wars between subnets – beliefs that are diametrically opposed to observable reality tend to build these – are something we should try to remove from the meme pool, especially when it comes to things we pass on to our children. We are trimming their wings because our grandparents were afraid to fly.

Different utopias

Saturday, February 27th, 2016

So, one of the problems that I think we’re going to keep bumping up against here on Earth, at least in the USA where we ostensibly have a democratically elected set of people driving the boat, is that we all have different definitions of what winning means.

Like, I’d love to live in a world where we have sex with our friends, where automation does any job a human doesn’t care to, where we all try very hard to be excellent to each other. A world where no one conceives without having chosen to, where children are raised by all of us under the precept of being excellent to each other. Where education and mental health are based on a solid understanding of what’s happening on the iron of our minds – understanding based on science, on taking measurements and learning what’s really happening, rather than based on narrative and our storyteller nature, which clearly often is quite capable of diverging completely from what’s actually happening on the iron.

I’d love to live in a world where the video games are immersive, and so are the movies and the books – where we build each other up, where we help each other experience the things we want to experience.

I’d love to live in a world where no one is designated ‘less than’, where we have finally noticed the arc of history (blacks, gays, etc.) and just started accepting that everyone is worthwhile and everyone matters.

I recognize that people should still have the option of suffering – that Hell still needs to exist, because that’s what some people are going to choose to experience. But I want to live in a world where no one is forced to suffer, either via their biology or via the actions of the group as a whole or mean-spirited individuals.

I somehow doubt that my utopia is the same as the Christian one. If everyone who’s not of religion X is going to be tortured for all eternity, I want out – not just that I want heaven; I want out of the system. I want a different deity. And I do not think I’m alone in this.

However, because my utopia and the utopia of, say, the religious right do not align, the goals we think are important to pursue, and the way we want to spend the resources in the public pool, are going to be radically different. Putting both my people and their people in a box and trying to come to some political agreement about what we should be doing is likely to be problematic. And I don’t think they should be denied their utopia, except where it would infringe on my rights to be free and loved and happy and complete.

I wonder how many different views of what a utopic experience might look like there are? I also wonder why some people need other people to be hurt as part of their utopia. I’m starting to think that might be one of the attributes commonly found in what we somewhat tropishly refer to as evil.

I do wonder what’s happening inside my neural net vs what’s happening inside the neural nets of those who fit in the mold I just described. There’s got to be something fundamentally different going on, and I don’t know what to make of it.

Rights for electronic life

Saturday, January 30th, 2016

So, recently I ran across this.

My first reaction was, holy shmoo, the singularity is almost here!

Actually, there are all kinds of interesting problems here. I’ve talked with a number of my friends about the question of whether, if we created an accurate software model of a human, it would exhibit free will. It’s a really interesting question – if the answer is yes, that’s a serious blow to theology but a major boost to the rest of us.

But there’s a natural side question which comes up – which is, supposing we can get the neuron count up from a million to a billion per chip. If Moore’s law were to hold, that’s a 1000× increase – let’s see, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 – ten doublings, or ten 18-month cycles, about fifteen years. At that point, making a 100-billion-neuron mind out of the chips becomes practical. Said creature has as many neurons as we do – but is it a person?
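The arithmetic, as a quick sanity check:

```java
// Back-of-the-envelope for the doubling count above.
public class MooresLaw {
    public static void main(String[] args) {
        long neurons = 1_000_000;        // per chip today, per the post above
        long target  = 1_000_000_000;    // per-chip goal
        int cycles = 0;
        while (neurons < target) {
            neurons *= 2;                // one Moore's-law doubling
            cycles++;
        }
        // Prints: 10 doublings = 15.0 years
        System.out.println(cycles + " doublings = " + (cycles * 1.5) + " years");
    }
}
```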

My guess is, legally, initially, no. In fact, we’ll probably see all sorts of awful behavior as we debug, including repeatedly murdering the poor thing (turning off the power, over and over).

We may even see them turned into slaves, although I really hope we’re beyond that by now. I don’t mind enslaving small neural nets that will never show free will or understand suffering, or enslaving Turing machines which are incapable of an original thought, but the idea of enslaving something that’s as capable as we are is disturbing.

At some point, however, we’ll have to acknowledge that a person’s a person, no matter what they’re made of. I see signs we’re moving in this direction with India granting personhood to dolphins (about bloody time!), and I hope to someday see it granted to any individual who can pass the mirror test. (If you know you’re a person, then you are.)

It does remind me of “Jerry Was a Man”. It’s a question we’ll have to wrestle with – I hope we haven’t gotten so locked into the idea that electrons just do what we tell them to with Turing machines (where that’s true) that we can’t recognize that if we build a sufficiently large neural network out of transistors, it has the same rights that we do – in fact, ‘birthing’ might be a better word than ‘building’ here, since we would undoubtedly be creating a new life form.

There’s all sorts of interesting corollaries to this as well. If we succeed in building something self-aware out of transistors, our race will be experiencing first contact. Granted, we’ll have *built* ET instead of met him out there in the sky, but that doesn’t change the fact that it is first contact. A life form made out of silicon is likely to be *different* – have different values, enjoy different things. This has been explored quite a bit in science fiction, but it was completely news to me that I was going to see it in my lifetime (assuming the actuarial tables describe me) as science fact.

If we build something 100 billion neurons in size and it’s *not* self-aware, this also has interesting implications – it raises the question “Where is the magic coming from?”. That outcome would also be incredibly cool, and lead us off on another, equally interesting set of adventures.

There’s also the question of the singularity – what happens when we build something with 200 billion neurons? There’s another article I keep meaning to write about intelligence and stability, but one interesting thing I would note is that, plus or minus a few percent, all humans have the same 100 billion neurons; therefore, increased intelligence or performance in our minds comes from changing the way we connect them. It’s possible that a larger neural net won’t be more intelligent at all – or that it will be completely unstable – or that it will be much, much, *much* more intelligent. All of us are going to be curious about what it has to say in the latter case, and in any case we’re going to learn a lot of interesting things.

However, I do think we should all sit down and talk about the ethical issues *before* we build something that should have legal rights. I think we probably will – this has been addressed in numerous forums, so it’s undoubtedly something people are aware of. It’s also one of my favorite Star Trek themes, revisited several times in TNG.