Archive for December, 2016

Politics, view horizons, and neural networks

Thursday, December 15th, 2016

So, one thing that has definitely come to light in recent days and weeks is that a lot of us are running around with fundamentally different views of reality at the moment. In some people’s worlds, Obama is a hero – in others, he’s a Muslim terrorist or worse. What gives?

Well, part of what gives is the idea of view horizons – some people like to talk about this as ‘bubbles’, and perhaps that’s a more reasonable word, but I’d like to briefly explore the idea from a slightly different angle.

So, in an NNN (a natural neural network, like the one in your head), each neuron can only see information that it’s either directly connected to, or is connected to a relay source for. In the experiments involving cutting the corpus callosum, you can see this dramatically demonstrated: when a placard containing instructions is placed in front of one eye of the subject, they follow the instructions on it, but when asked why they did so, they tell a story that’s completely unrelated to “Because you told me to”. The instruction on the placard is no longer on the view horizon – no longer routable via a reasonably short route – for the part of the subject’s mind that is in control of their voice.

Similarly, if you think of us as independent neurons in a very, very large neural network – with communication links like books, voice, and the internet taking the place of the dendrites coming off of neurons – we can only know about what is on our view horizon. Most of us don’t have direct access to Obama, so we can’t decide from personal interaction whether he’s a Muslim terrorist, a superhero, or somewhere in between. Instead, we’re all connected to clusters of other neurons – our friends – or to a broadcast bus – the news – and those connections steer our view at least somewhat.
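To make the ‘view horizon’ idea a little more concrete, here’s a minimal Python sketch – the graph and the hop limit are made up purely for illustration. Each node can only ‘see’ what it can reach within a few relay hops; everything else arrives, if at all, already filtered through the nodes in between.

from collections import deque

def view_horizon(graph, start, max_hops):
    """Return the set of nodes visible from `start` within `max_hops` relays."""
    visible = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in visible:
                visible.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return visible

# A toy social graph: me -> friends -> news outlets -> primary source
graph = {
    "me": ["friend_a", "friend_b", "news_x"],
    "friend_a": ["news_x"],
    "friend_b": ["news_y"],
    "news_x": ["white_house"],
    "news_y": ["white_house"],
}

print(view_horizon(graph, "me", 1))  # friends and one outlet - no primary source
print(view_horizon(graph, "me", 3))  # the primary source is finally reachable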

Now, there’s a real possibility that both universes exist – we keep learning funny little things at the quantum level, and it’s possible that there is both a universe where Obama is a Muslim terrorist and one where he’s a superhero, and that our experience here on Earth at the moment is at the confluence of the two worldlines. However, it’s far more likely that what we’ve got is two teams of people, each spinning the story in the direction they believe is true – and because of confirmation bias, each drifting slowly further and further from reality.

Now, I’ve got news for you – no matter which side you’re on, it’s not likely that you have an accurate view. Your view horizon is a long way from the original source, and everything reaching you has been filtered through many, many minds in a game of telephone – and worse, those minds are influencing each other. This opens up questions about what exactly happens inside our own minds. We tend to think of ourselves as a single individual, an ego if you will, but there’s almost certainly a large fraction of our neurons that are ego-dissenting – these are what keep the inhibitory inputs on our neurons lit up, what keep us from becoming narcissists or something worse, and what provide the all-important critical judgement we need when we, for example, want to create great works of art.

I am curious as to whether what we’re seeing in the political sphere is a similar thing on a macro level.

City Of New Orleans (w/ Jefferson Jay)

Wednesday, December 14th, 2016

Again, Jefferson Jay has very kindly allowed me to play Pro Tools tennis with one of his tracks.. this one is a favorite of mine, and one I’ve covered on my own a number of times. I think you all will enjoy how mellow this came out..

City Of New Orleans

Possible song idea

Saturday, December 10th, 2016

(to be sung in a country/western style)

So I hear about how you’re banning protests on the Mall
Trump, did you read the first amendment at all?
It’s a big complex system, and I think it’s understood
That your approach to changing it doesn’t look too good

(chorus)
It takes a gentle hand on the controls
No sudden movements, no wild rolls
America’s a big airplane, moving across the sky
And it takes a gentle hand if we’re not all gonna die

I hear about how you’re tearing down the E.P.A.
Maybe you might want to think about what the other side has to say?
Millions of people have struggled to get us where we are
And if you only focus on profit we’re not going to get too far

I know you think you know it all, but I think we all agree
The smartest man in the room knows you only know what you can see
You’re acting like you’re Maverick in your fighter jet agleam
But you’re flying a 747 heavy, so please get on our team

Please understand that I want you to succeed
But first I think you need to understand the concept of meta-greed
The best kind of greed wants everything for us all
If you persist in wanting it only for yourself, that airplane’s gonna stall

Links to explore when I get some time

Saturday, December 10th, 2016

http://www.informationphilosopher.com/freedom/mechanisms.html

https://youarenotsosmart.com/2016/12/02/yanss-090-questioning-the-nature-of-reality-with-cognitive-scientist-donald-hoffman/

Fun discussion about ANNs on facebook

Saturday, December 10th, 2016

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate 15 years out on DARPA SyNAPSE and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying that a machine might be more intelligent than humans. Such a system is impossible to test and coupled with the ability to lie, it’s a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure during entrainment. This enables us to do things like giving our intelligence a powerful desire for, say, human artwork. That might not be the most moral thing ever, but it is something we could do – and it gives us something to trade with it, offering us the possibility of befriending it.
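To spell out what ‘definitions of success and failure’ means in practice: whoever writes the scoring function decides what the network wants. Here’s a deliberately tiny Python sketch – the reward function and the hill-climbing learner are toy stand-ins, not a claim about how a real entrainment process would work:

import random

def designer_reward(state):
    # The 'attractor' we chose: success is defined as being near 10.0.
    # Whoever writes this function defines what the system wants.
    return -abs(state - 10.0)

def hill_climb(reward, state=0.0, steps=500, step_size=0.5):
    # Toy learner: keep any random perturbation that scores better.
    for _ in range(steps):
        candidate = state + random.uniform(-step_size, step_size)
        if reward(candidate) > reward(state):
            state = candidate
    return state

print(hill_climb(designer_reward))   # ends up close to 10.0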

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than a human – well, I’m of the opinion that if you have something with more neurons than a human and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré conjecture, and that was simply another human. It took 5 years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once it’s been come up with, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a Pandora’s box. But my experience with humans suggests we *like* opening Pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when you don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNNs naturally evolve towards whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning how best to utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity that placed another species above its own interests would be maladapted.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before, unless you want to count things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concern is more that this could turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is when a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m thinking more that, with a larger neural surface area, it might be able to see patterns in world economies and governments and suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presume human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN and you control its attractors, you control what it’s trying to achieve. You define success – the moments when neurons fire together and wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NNN know when it’s succeeding. Over time it gets more complex.
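For anyone who hasn’t run into it, ‘fire together, wire together’ is Hebbian learning, and the core update really is one line. A minimal numpy sketch – the sizes, the learning rate, and the correlated-input trick are all made up for illustration:

import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random((200, 8))      # 200 presentations of an 8-unit input pattern
inputs[:, :4] += 0.5               # units 0-3 tend to be strongly active together

weights = rng.random(8) * 0.1      # one output neuron's incoming weights
learning_rate = 0.01

for x in inputs:
    y = weights @ x                        # output activity
    weights += learning_rate * y * x       # Hebb: co-active units strengthen their link
    weights /= np.linalg.norm(weights)     # keep weights bounded (plain Hebb grows forever)

print(weights.round(2))   # the co-active units 0-3 end up with the biggest weights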

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child turns into an adult and starts controlling their own definition of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just like a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have plenty of voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. Beyond their ‘childhood’, when we control their attractors, they might not be. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy.. what Heinlein suggested in Friday.. or it may lead to it wanting to get along with us

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that, in fact, is SyNAPSE’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along

Charity boycott..

Thursday, December 8th, 2016

I wonder if we could get a sizable group of people together to agree to boycott any charity that sends out paper mailings asking for more money.

Basically, it’s close to impossible to donate anonymously to most of the big charities, and it seems like a lot of them use your donation to spam you with requests for more money instead of using it to actually fix whatever their charity is supposed to be fixing. There should be an option to donate without ever getting harassed for more money – or at least to get harassed only via electronic means.

Back to thinking the 1% aren’t the problem

Monday, December 5th, 2016

So, after digging more into the system as it currently sits, I’m back to thinking the 1% are not the problem. Why? Because the value of a dollar is scalable, and because the things they are doing are generally not slowing the velocity of money any. My current guess is that the American corporate model is the problem, but that’s subject to change after I research some more.
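For anyone who wants the textbook handle on ‘velocity of money’: it’s total spending divided by the money stock, V = PQ / M. A quick Python sketch with made-up round numbers, not real 2016 figures:

# Quantity-theory identity: M * V = P * Q
money_supply = 20e12      # M: dollars outstanding (rough assumed figure)
nominal_spending = 18e12  # P*Q: total spending in a year (made-up round number)

velocity = nominal_spending / money_supply
print(f"each dollar changes hands about {velocity:.1f} times a year")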

By the way: it is a big, complex system. Just the type of thing I like to sink my teeth into. I do not think this is a lifelong enthusiasm – more likely I’ll spend a year learning about it and move on. (I’ve learned that my life has things that I’m interested in for life, and things that I’m interested in until I learn enough about them, or experience enough of them, to get my fill.)

One thing I would like to clarify is that I’m pretty sure whatever the problem is, it’s benefiting nobody. I’m guessing it’s going to be one of those things like Tesla’s induction motor.. obvious once you see it, but it takes a special kind of bending of your mind to see it for the first time.

I still can’t get over my hunch that whatever the problem is, judicious use of powerful computers and fast databases might be part of the solution.

Hope And Despair

Monday, December 5th, 2016

Little bit of trance for my fans out there. This will probably get another makeover with some more layers of lead instruments and a few samples or some spoken word.

This was actually just a ‘getting my mojo back’ track – I spent three weeks in CA, not playing at all, and was a little rusty.

Hope And Despair

learning

Sunday, December 4th, 2016

So, for those of you who hadn’t already guessed, there’s no way I’d try to implement any of my economic theories yet. I’m still learning about the subject. (And thanks to all of you who have suggested books and papers and provided summaries and necessary bits of data.. special shout-outs to Steve and James, who have both been extra helpful.)

Most of what I’m learning is that I still don’t have the whole picture in my head. For example, one of my criticisms of the stock market was completely flawed, because I was forgetting that when someone sells stock, the seller gets the proceeds – the money doesn’t just disappear, it changes hands. It is not true that money sunk into the stock market is no longer out there in the world making good things happen: the money doesn’t stop moving when stock is purchased, it goes into the hands of the seller, who then goes and does something else with it. (We hope.) Even money in a savings account doesn’t necessarily stop moving, because new loans are made against it.
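The savings-account point is the classic money-multiplier story: if banks keep a fraction of each deposit in reserve and lend the rest, an initial deposit ends up supporting total deposits of roughly deposit / reserve_ratio. A toy Python sketch – the 10% reserve ratio is just an illustrative number:

def total_deposits(initial_deposit, reserve_ratio, rounds=100):
    # Follow one deposit as it is repeatedly lent out and re-deposited.
    total, lendable = 0.0, initial_deposit
    for _ in range(rounds):
        total += lendable                 # the money sits in someone's account...
        lendable *= (1 - reserve_ratio)   # ...while most of it is lent out and spent again
    return total

print(total_deposits(1000, 0.10))   # approaches 1000 / 0.10 = 10,000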

But the question remains – we’re working more efficiently than we ever have, and many of us are working harder than ever – two incomes instead of one – and yet many of us are still broke, or worse, underwater. Why? Is it the additional cost of all of us having instant network access – is the internet itself that expensive? Is it the increased cost of health care, and the fact that a somewhat more toxic world leads to more health complications? Is it the increased cost of education? Or is it the increased cost of interest, plus better and more effective advertising that encourages many of us to spend beyond our means? Energy is more or less the same price it’s always been once you adjust for inflation, so it’s not that.

I’d love to hear some thoughts on this.

It’s interesting wrestling with this – I, for now, am doing just fine – but many of my friends seem to be struggling a lot.

Fundamental issues with thinking about money

Thursday, December 1st, 2016

So, I was discussing with a friend of mine how large the outstanding money supply is. His best estimate is that there is about $20T outstanding, or just about $65k per person currently here.

Now, I don’t know about you, but I see this as likely to be a problem.

We literally do not have enough money to buy even a fraction of the stuff here. If we all decided, oh, we want to buy everything that’s in the USA at once, the system would crash, spectacularly. The money in existence is only a fraction of the tangible value it is supposed to represent.
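The arithmetic behind that per-person figure, for anyone who wants to check it – the population number here is my rough ballpark, not an official census figure:

money_outstanding = 20e12    # friend's estimate of dollars outstanding
population = 310e6           # rough US population ballpark

per_person = money_outstanding / population
print(f"${per_person:,.0f} per person")   # roughly $65,000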

Now, I understand that most of you take exception to my assertion that the only sane way to think about money in our current system is as being backed by all the real value in the system. It’s clearly not backed by nothing. The value may be propped up by the fact that certain commodities are traded in it, but it’s also clearly not backed by oil.

Backing a currency with a depleting, finite resource is exactly the mess we were trying to get out of when we abandoned the gold standard – although, it would appear, in many people’s minds we abandoned it for the debt standard, which is more than a little nutty. Only 5 countries in the world are not currently in debt to someone, which suggests that loaning money into existence has gotten quite popular – except that I don’t actually think that’s what we’re doing. I think some bean counters have gone ’round the bend. We’re creating real, tangible value – with intellectual property and scientific discoveries, with the work we put into building and upgrading physical plant and infrastructure all around the world, and, painful as it is to admit, with the physical natural resources of the world itself, some renewable, some not.

Anyway, the truth is – if you put on your ‘sane person’ glasses for a minute – that the money is clearly backed by everything you can buy with it. If we could get people to grok this, maybe we could put some more into circulation without people treating it as inflationary. We wouldn’t need to put more into circulation, I should mention, if it weren’t for the impressive levels of stupidity of the 1%.

We need to give these people something else to keep score with, because the money pool – already too small – is spending entirely too much time in their hands. Not only that, but ‘interest’ in the financial world does not match what’s going on in the real world. Any time your paper bookkeeping system is out of whack with what’s really happening out there in the world, you’re going to get into trouble. I’ve been avoiding talking about interest for a while now, because I’m still turning over my thoughts about how I would handle it, but I think it is dangerous to ask for as much of it as lenders currently do, because doing so warps the paper tracking system relative to reality. The money is backed by real goods, and while we do have more real goods every day, we do *not* have 27% more of them every year – probably not even 4% more. I get the temptation to cheat in order to enrich yourself or your corporation, banks, but are you sure you want to court a system crash and play musical chairs with who gets the tangibles when everything comes undone here?
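To put rough numbers on that divergence, here’s a quick sketch using the two rates from the paragraph above – paper claims compounding at a credit-card-style 27% against real goods growing at an optimistic 4%. Nothing rigorous, just the shape of the problem:

paper_rate = 0.27   # interest demanded on the paper side
real_rate = 0.04    # generous guess at yearly growth in real goods

paper, real = 1.0, 1.0
for year in range(1, 11):
    paper *= 1 + paper_rate
    real *= 1 + real_rate
    print(f"year {year:2d}: paper claims {paper:5.2f}x, real goods {real:4.2f}x")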

Anyway, back to my assertion that the 1% are being stupid. Every dollar you keep in your bank account beyond your personal needs is a dollar that isn’t out there in the world doing something. In a cash-starved, money-based RAS, money that isn’t in motion is useless, worthless. The more money you have in motion – changing hands, facilitating creation and growth and living and the like – the better the quality of life for everyone, including you, none-too-bright 1-percenters, because part of what that money powers is the discovery of intellectual property, which is something you cannot buy just by deciding to buy it. Genius is where you find it, and you have no way of knowing which of the many, many people around you (some of whom might be starving on the streets) are the geniuses. I encourage you to read about the end of Tesla’s life, and consider that if a few more dollars had come his way, he might not have died when he did, and he might have gone on to create even more cool things that we’d all be using today.

If you read and understood my earlier article on Neurological Wealth, you know that the intellectual property still waiting to be discovered could ultimately be far, far beyond anything you can currently imagine. There’s good reason to encourage people to get out there and create. This is wealth you cannot buy today.. you have to let it grow and accrue naturally, and you’re stifling it so you can *keep score*.

I realize it’s vanishingly unlikely that any of you 1% types read this, and even more unlikely that if you did, you’d understand it. I’ve come to accept that there’s a very small number of people on the globe with both the intelligence and the experience to even understand this discussion, and the odds of very many of them finding their way to my blog are pretty tiny. Nonetheless, I will continue this intellectual exercise, for myself if for no one else.