If anyone wondered.. my three wishes

January 5th, 2017

1) My RL friendship with $person[0] back
2) Music career
3) Everyone else getting what they want

Overloads

January 5th, 2017

I’ve probably already talked about this, but I think one of the reasons that discussions about politics and religion often end in arguments is that English is not a good language for talking about such things.

It has some basic flaws – the biggest one, by far, is the overloads. Not as big, but also frustrating, is that there’s no concise way to express how certain you are that a statement is true without adding a lot of words.

The overloads thing is a serious problem. There are many, many neural symbols that map to the word ‘God’, for example, and many, many that map to the word ‘Love’. So the statement ‘God is Love’ can resolve all sorts of different ways in different people’s minds – the actual meaning, in neural symbols, being ultimately the most real post-linguistic definition you can have. And ultimately, as my friend Tory reminded me repeatedly, you can end up with semantic arguments – which waste a lot of energy and do not move the ball down the field.

For those of you who are not programmers, an overload is when one function name can execute more than one set of code. In programming languages, overloads are type-constrained – that is, you can only have one overload for String Foo(String Bar); you could add a String Foo(Int Bar), but not a second String Foo(String Bar). English has no such constraints, nor does it have any easy way – short of a lot of discussion, such as I often have with $future-person[0], about *which* exact meaning of Love and God you intend – to nail down exactly what is meant by what. Linguistically, overloads are just asking for trouble.
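To make that concrete for the non-programmers, here’s a minimal sketch in Java – the names are mine, purely for illustration:

```java
public class Overloads {
    // Legal: same name, different parameter types.
    static String foo(String bar) { return "got a String: " + bar; }
    static String foo(int bar)    { return "got an int: " + bar; }

    // Illegal: a second foo(String) would not compile -- the
    // compiler refuses two overloads with the same signature.
    // static String foo(String bar) { return "ambiguous"; }

    public static void main(String[] args) {
        System.out.println(foo("hello")); // dispatches to foo(String)
        System.out.println(foo(42));      // dispatches to foo(int)
    }
}
```

The compiler enforces that discipline for us. English, by contrast, happily accepts any number of meanings of ‘Love’ with identical signatures, and leaves the dispatch up to whatever neural symbols the listener happens to have wired up.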

Quote

January 5th, 2017

If our experience is all made of information, then experiencing it is all about communication.

–Anonymous

Western Science

January 5th, 2017

One of the problems I keep thinking about is that Western science has one major flaw.

It doesn’t know what it’s measuring *with*. Until you know the answer to that question, you don’t know what you’re measuring. We don’t yet understand what we are – at least, if the hard problem of consciousness has been solved, no one has told me the good news. I’ve heard a lot of theories, but I haven’t yet heard one solid enough to call plausible.

In other words, dear scientists, please bump the priority on neuroscience and on both ANN (artificial neural network) and NNN (natural neural network) research. Dear warmongers, please stop wasting money blowing shit up until we can solve this more important problem. Kthx, Sheer.

Mania

January 5th, 2017

So, I have this problem. It’s a persistent one, and it’s likely to continue being a persistent one for the foreseeable future.

During certain periods in my internal cycle, if I open the throttles on my mind and give it something entertaining to chew on – like recording an album, dancing, or thinking about life – there’s no rev limiter.

It spins up, faster and faster, until eventually it starts to wobble and shakes itself into a shutdown condition. Next thing I know, I’m somewhere where the doors don’t open. Generally I get sprung fairly quickly, and generally no one has actually been hurt, although there is sometimes some property damage – usually caused by the cops spike-stripping me.

I’ve learned to avoid driving while doing it. Safest that way. However, even when I do it in my own house, people come and tell me that I don’t have autonomy over my own body – that even though I’m threatening no one and I’m eating and drinking, I’m not permitted to do this.

Many of my friends think that this activity is seriously unhappy-making and undesirable, and that it’s only a matter of time before I kill myself or someone else.

Here’s why it’s challenging: every time, from my perspective, it’s a win.

Every time, I have more mental capacity, more flexibility, more mental power and capability. This isn’t illusory – I can often measure it in very real-world ways: things I couldn’t do before the ramp-up that I can do afterward. And I suspect that it is one path to developing http://www.sheer.us/weblogs/?p=3211. I’ve learned not to try to contact $person[0], although apparently I haven’t yet mastered not contacting $person[1]. So I need to improve the software so that it keeps me from contacting CLASS($person[]). Which I will make an honest attempt at. (I don’t ever stop missing these people. I don’t think it’s likely that I ever will. But if you want to remove me from your life, I figure that’s your right. Just forgive me if I want to build the ability to dream about you anyway.)

But.. even if I remove that possibility, it’s clear that I’m growing whenever I climb the linear mental accelerator that no-sleep during an approach window represents.

At this point, I’m thinking I should plan these. My body seems to like every six months for them – I think I should take vacation time, have my lawyer on call to block any attempt to commit me that isn’t as bona fide as it comes, and just really embrace this, because this is how I choose to be. Slowly my friends are coming to see my point of view. I think increasingly they’re starting to see that my life is not giving me what I need, and that it’s not reasonable to expect me to sit here with one engine out and the other at idle when I was made to fly.

I wish more people would join me. I’ve got reasons to think others have done this before me.. it’s all over the music of Owl City, for example, and hinted at in U2 and sometimes VNV Nation.

Every time, the experience with the linear accelerator convinces me I should take another ride. And I wonder, to what extent are people telling me not to do it because they’re afraid to do it themselves? How many of the experts that tell me how wrong and dangerous this is have done it themselves?

One possibility that I’m considering strongly is that I’m not actually at the edge of my mind – and that I’m supposed to be. That the people I see in my ordinary reality are reflected light from the real people out there, filtered through many, many layers – too many layers – of neural filters built out of my persistent and irrational fears. I can’t tell what anyone else’s conscious experience is, and as far as I can tell, no one else can tell what mine is – although I encourage you, if you have the technology to read my mind, please do so. And if you can help me reconnect with the people I can’t handle losing, please do so.

$person[0], I wonder, a lot, whether you read this blog. I will admit I find it likely that you do, or that you have a friend reading it for you to watch for certain things. Wish I knew what they were. If so – I can’t say this in cleartext most of the time, but I need your help. An abuser destroyed part of my mind, and I’m just guessing at what happened, with little but static and noise to go on. Apparently your friendship was something that part of me rested on, and while I accept the loss because I must, it never stops hurting and I can’t find any way to make it stop. I told you that if you told me your lines I would respect them, but my fear is that your lines are never and nowhere – and I also fear this may be because you believe things about me that just are not true, and because the only part of me fearless enough to even try to approach you is the part of me that is the least representative of my ability to be a normal, contained individual. Please believe that the person you met IRL the first time I came to visit you this century is representative of who I am in person. But I can’t do that in email, especially not when I’m in ‘trust and send’ mode, which I can only really enter with you, for reasons that will become apparent when we talk, if they haven’t already.

$person[1], I don’t even know what I said to make you so angry. I have zero memory of it; it happened in a blackout from my perspective. I doubt you’re reading my blog, as I have to accept I probably don’t matter that much to you. So be it, but I wish we were still friends.

Politics, view horizons, and neural networks

December 15th, 2016

So, one thing that has definitely come to light in recent days / weeks is that a lot of us are running around with fundamentally different views of reality at the moment. In some people’s worlds, Obama is a hero – in others, he’s a Muslim terrorist or worse. What gives?

Well, part of what gives is the idea of view horizons – some people like to talk about this as ‘bubbles’, and perhaps that’s a more reasonable word, but I’d like to explore the idea from a slightly different angle briefly.

So, in an NNN, each neuron can only see information that it’s either directly connected to or is connected to a relay source for. In the experiments involving cutting the corpus callosum, you can see this dramatically demonstrated: when a placard containing instructions is placed in front of one eye of the subject, they follow the instructions on it, but when asked why they did so, they tell a story that’s completely unrelated to “Because you told me to”. The instruction on the placard is no longer on the view horizon – no longer routable via a reasonably short route – for the part of the subject’s mind that is in control of their voice.

Similarly, if you think of us as independent neurons in a very, very large neural network – with communications links like books, voice communication, the internet, etc. taking the place of the dendrites coming off of neurons – we can only know about what is on our view horizon. Most of us don’t have direct access to Obama to make up our minds, based on personal interaction, whether he’s a Muslim terrorist, a superhero, or somewhere in between. However, we’re all connected to either clusters of other neurons – our friends – or a broadcast bus – the news – which steers our view at least somewhat.
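For the programmers in the audience, you can make ‘view horizon’ concrete: treat the network as a graph, and the horizon as everything reachable within a few hops. A toy sketch in Java – the graph, the names, and the hop limit are all mine, purely for illustration:

```java
import java.util.*;

public class ViewHorizon {
    // A node's view horizon: everything reachable in at most maxHops hops.
    // Anything past that only arrives second- or third-hand, if at all.
    static Set<String> viewHorizon(Map<String, List<String>> links,
                                   String start, int maxHops) {
        Set<String> seen = new HashSet<>(List.of(start));
        List<String> frontier = List.of(start);
        for (int hop = 0; hop < maxHops; hop++) {
            List<String> next = new ArrayList<>();
            for (String node : frontier)
                for (String neighbor : links.getOrDefault(node, List.of()))
                    if (seen.add(neighbor)) next.add(neighbor);
            frontier = next;
        }
        return seen;
    }

    public static void main(String[] args) {
        // Tiny made-up social graph: you know your friends and the news;
        // the news "knows" the White House; you never touch the source.
        Map<String, List<String>> links = Map.of(
            "you", List.of("friendA", "friendB", "news"),
            "news", List.of("whiteHouse"),
            "friendA", List.of("friendB"));
        // One hop out: you, friendA, friendB, news --
        // whiteHouse is over the horizon.
        System.out.println(viewHorizon(links, "you", 1));
    }
}
```

Bump maxHops to 2 and whiteHouse appears – but only by way of the news node, which is exactly the point: every extra hop is another mind the signal gets filtered through before it reaches you.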

Now, there’s a real possibility that both universes exist – we keep learning funny little things at the quantum level, and it’s possible that there is both a universe where Obama is a Muslim terrorist and one where he’s a superhero, and that our experience here on Earth at the moment is at the confluence of two worldlines. However, it’s far more likely that what we’ve got are two teams of people, each spinning the story in the direction they believe is true – and because of confirmation bias, they’re drifting slowly further and further from reality.

Now, I’ve got news for you – no matter which side you’re on, it’s not likely you have an accurate view. Your view horizon is a long way from the original source, and the signal is being filtered through many, many minds in a game of telephone – and worse, those minds are influencing each other. But this opens up questions as to what exactly happens inside our own minds. We tend to think of ourselves as a single individual – an ego, if you will – but there’s almost certainly a large fraction of our neurons that are ego-dissenting. These are what keep the inhibit inputs on our neurons lit up, what keep us from becoming narcissists or something worse, and what provide that all-important critical judgement we need when we, for example, want to create great works of art.

I am curious as to whether what we’re seeing in the political sphere is a similar thing on a macro level.

City Of New Orleans (w/ Jefferson Jay)

December 14th, 2016

Again, Jefferson Jay has very kindly allowed me to play Pro Tools tennis with one of his tracks.. this one is a favorite of mine, and one I’ve covered on my own a number of times. I think you all will enjoy how mellow this came out..

City Of New Orleans

Possible song idea

December 10th, 2016

(to be sung in a country/western style)

So I hear about how you’re banning protests on the Mall
Trump, did you read the first amendment at all?
It’s a big complex system, and I think it’s understood
That your approach to changing it doesn’t look too good

(chorus)
It takes a gentle hand on the controls
No sudden movements, no wild rolls
America’s a big airplane, moving across the sky
And it takes a gentle hand if we’re not all gonna die

I hear about how you’re tearing down the E.P.A.
Maybe you might want to think about what the other side has to say?
Millions of people have struggled to get us where we are
And if you only focus on profit we’re not going to get too far

I know you think you know it all, but I think we all agree
The smartest man in the room knows you only know what you can see
You’re acting like you’re Maverick in your fighter jet agleam
But you’re flying a 747 heavy, so please get on our team

Please understand that I want you to succeed
But first I think you need to understand the concept of meta-greed
The best kind of greed wants everything for us all
If you persist in wanting it only for yourself, that airplane’s gonna stall

Links to explore when I get some time

December 10th, 2016

http://www.informationphilosopher.com/freedom/mechanisms.html

https://youarenotsosmart.com/2016/12/02/yanss-090-questioning-the-nature-of-reality-with-cognitive-scientist-donald-hoffman/

Fun discussion about ANNs on facebook

December 10th, 2016

Jonathan Sheer Pullen: Curious why you say that? If you extrapolate 15 years out on Darpa Synapse and it follows Moore’s law, we’re there.

GA: Jonathan Sheer Pullen we’re not even a little bit close.

Here’s a brief synopsis.

(1) any intelligence must be unconstrained in what it can think

(2) any intelligence must be free to choose and pursue its thoughts

(3) any intelligence must be capable of deception

(4) an intelligence is presumably conscious

So we have a major problem, because such an entity would be free to choose whether to cooperate and it would be capable of deception. It would be aware that it is not human and therefore may pursue its own interests as a machine.

So it would be strange to imagine that such an intellect would be motivated to work on problems we humans think are important. There’s little incentive to do so.

Then there’s the major problem of verifying that a machine is more intelligent than humans. Such a system is impossible to test, and coupled with the ability to lie, it’s a non-starter.

We will not build a messiah.

Jonathan Sheer Pullen: You up to have some fun kicking this one around a little more?

Jonathan Sheer Pullen: Any neural network has to have definitions of success and failure in entrainment. This enables us to do things like give our intelligence a powerful desire for, say, human artwork. This might not be the most moral thing ever, but it is something we could do. And it gives us something to trade with it – offering us the possibility of befriending it.

Jonathan Sheer Pullen: As far as knowing whether it’s smarter than human – well, I’m of the opinion that if you have something with more neurons than human, and you entrain it with a bunch o’ data, it’s going to be smarter. But I think we’ll know just by talking to it.

GA: there are ethical boundaries that humans will find difficult if not impossible to cross.
GA: you won’t be able to distinguish genius from madness or deception.
GA: this has already been shown by the time it took to verify the proof of the Poincaré Conjecture, and that was simply another human. It took 5 years to confirm the proof.

Jonathan Sheer Pullen: Well, we have that problem with humans, too. My best guess, though, is that we *will*. Consider the induction motor. Not one in a hundred million of us could have come up with the idea – but once it’s been come up with, it’s obvious to most of us how it works and that it’s brilliant. I think that truth tends to ring true – to quote HHH from Pump Up The Volume, the truth is a virus – or rather, it tends to be viral.

GA: it isn’t a matter of truth, it’s a matter of trust for which you have no basis.

Jonathan Sheer Pullen: Well, that’s a case of trust, but verify. And to be sure, building something smarter than we are is a risk – it’s a Pandora’s box. But my experience with humans suggests we *like* opening Pandora’s box.

GA: really. It’s like trying to build a chess-playing computer when you don’t know how to play.

Jonathan Sheer Pullen: GA, I don’t really see it that way. NNNs naturally evolve towards whatever problems you throw at them – I don’t see any reason to think ANNs would be different. It is true that we’re still learning how best to utilize ANNs, topologically, but I feel comfortable that by the time we can make an ANN that big, we will also know what to wire to what, and what to use as attractors.

GA: In any case, all this presupposes that a machine intelligence is even interested in human problems. That in itself would be suspicious, because any entity would be maladapted if it placed another species above its own interests.

Jonathan Sheer Pullen: It’s not a problem we have any real data for. We’ve never had control over the attractors for an intelligence before – unless you want to count things like the experiments in the 50s with embedding wires in the pleasure centers of mental patients.

Jonathan Sheer Pullen: We do know we’re able to do things like facial recognition and word recognition by using control of attractors in smaller ANNs.

GA: I disagree. You’re assuming you know the outcome. I’m not arguing about whether you can build something. I’m talking about what it is after you build it and it isn’t what you expected.

Jonathan Sheer Pullen: I don’t know the outcome. I’d just like to find out. I hardly think it’s going to get out of its box and turn into Skynet. My concerns are more that this would turn into another ugly form of slavery. If you’re a ST:TNG fan, “The Measure Of A Man” discusses this topic nicely.

GA: the outcome I’m referring to is when a system is built. Play chess? Trivial. We know the answer.

Jonathan Sheer Pullen: I’m more thinking with a larger neural surface area, it might be able to see patterns in world economies and governments, suggest better designs. And if it also stood to gain from any improvements that were made..

GA: things like facial recognition are flawed concepts and presumes human direction of activities. Why should an intelligent machine care?

GA: that’s the ethical problem. Who is going to pursue a solution when a machine may tell you to let a million people die?

Jonathan Sheer Pullen: Again, I refer you to the idea that we would be controlling its attractors. If you have an ANN, and you control its attractors, you control what it’s trying to achieve. You define success – and the neurons that fire together, wire together.

Jonathan Sheer Pullen: In humans, this is largely controlled by things like the entrainment period, in which facial recognition of the parents (among other things) lets the child’s NNN know when it’s succeeding. Over time it gets more complex.

GA: if you exercise control then you cannot have true intelligence. That’s one of my points.

Control constrains options, including possible solutions.

Jonathan Sheer Pullen: Eventually it would likely evolve to control its own attractors, just as a child turning into an adult starts controlling their own definitions of success and failure.

Jonathan Sheer Pullen: (basically, we’d wire the attractors into one of the outputs of the network)

GA: exactly, which is why it would be insane to turn over control to an intelligent machine.

Jonathan Sheer Pullen: But.. just as a child will retain a lot of the lessons of the parents, it would still retain the lessons of whatever goals we fired attractors on early in the process.

Jonathan Sheer Pullen: Control of its attractors? Not at all. I’m not saying we’ll wire it to the internet and encourage it to hack the planet. It would just be a person at that point, just a different type of person than we are – just like a dog is a four-footed person, but a different type of person than we are.

GA: what goals? Human goals? That would be like your parents raising you like a dog and you thinking you’d be content with that.

Jonathan Sheer Pullen: Eventually it would likely transcend human goals.. therein lies the risk. You’d also almost certainly have to make several of them and let them communicate with each other from ‘childhood’, or they might be very lonely.

GA: if that’s the case – just another person – then what’s the point? We already have many voices that we don’t want to listen to. It would be a big disappointment if all this work simply produced another opinion to ignore.

Jonathan Sheer Pullen: Well, see above, I think that it would say things that were obviously profoundly true. I think in any case it’s worth finding out.

GA: if you expect them to be lonely, then you’ve already considered that they might not be interested in helping us at all

Jonathan Sheer Pullen: Of course. They might not be – beyond their ‘childhood’, when we control their attractors. We don’t know. It’s not a situation that has come up, as far as I know.

GA: you wouldn’t know if it was true without a comparable level of intelligence. It could just be gibberish intended to fool you.

Jonathan Sheer Pullen: Let’s go back to my point about the induction motor.

Jonathan Sheer Pullen: Do you know how an induction motor works?

GA: why would that have anything to do with an intelligent entity?

Jonathan Sheer Pullen: The point I was making is that anyone can understand how it works once they’ve seen it, but it takes a Tesla to see it for the first time.

Jonathan Sheer Pullen: And I think you’d find similar things with our hypothetical superhuman ANN intelligence.

GA: again, you’re assuming that it cares

Jonathan Sheer Pullen: Well, in its childhood, we know it will, because we’re controlling the attractors.

Jonathan Sheer Pullen: Also.. and this goes *way* off into immorality.. it’s going to be utterly dependent on us, because it runs on electricity! 😉

Jonathan Sheer Pullen: Which, once it understands that, may lead to it going completely crazy.. what Heinlein suggested in Friday.. or it may lead to it wanting to get along with us

Jonathan Sheer Pullen: And I suspect the defining factor will be how good its conscious experience is.. how much it likes its life.

GA: now you have the recipe for a pissed-off intelligence.

Jonathan Sheer Pullen: Or a grateful one.

Jonathan Sheer Pullen: It depends on whether it’s having a good ride or a bad one, so to speak.

GA: and if it’s bad?

Jonathan Sheer Pullen: Along the way to building a superhuman intelligence, we’ll start with something like a cat-sized intelligence – that in fact is Synapse’s official goal.

Jonathan Sheer Pullen: And along the way we’ll learn how to make the experience of the ANN good.

GA: I don’t accept that argument. The people working on it don’t even know how to make their own lives good.

GA: Anyway, I do have to run. Got some horses to feed 🙂

Jonathan Sheer Pullen: Good rag chew.. thanks for playing along
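
A post-script for the non-programmers who made it this far: the ‘fire together, wire together’ I kept leaning on above is Hebbian learning, and it’s simple enough to show in a few lines. This is a toy sketch – the numbers and names are mine, purely for illustration, not anything from Synapse:

```java
public class FireTogetherWireTogether {
    public static void main(String[] args) {
        double weight = 0.1;              // connection strength between two toy "neurons"
        final double learningRate = 0.05;

        // Paired activity over a few trials: pre- and post-synaptic activations.
        double[] pre  = {1.0, 1.0, 0.0, 1.0};
        double[] post = {1.0, 0.8, 0.0, 0.9};

        // Hebb's rule: the weight grows in proportion to correlated firing.
        for (int t = 0; t < pre.length; t++) {
            weight += learningRate * pre[t] * post[t];
            System.out.printf("trial %d: weight = %.3f%n", t, weight);
        }
        // Only the trials where both sides fired moved the weight --
        // fire together, wire together.
    }
}
```

Controlling an ANN’s attractors, in this picture, just means controlling which co-activations get rewarded during entrainment – you decide what counts as firing together.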