Is solipsism true such that I am the only observer?
21% chance

Resolves once it's possible for both me and others to verify, or N/A if that's impossible

My prior on solipsism being true for me is probably higher than yours, which would naturally center on yourself, given that you only have access to your own observations. Therefore, it's a unique opportunity for me (and you) to profit in expectation.
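
As a rough illustration of the "profit in expectation" claim - the 40% and 5% credences below are made-up numbers for the example, not anyone's stated beliefs, and payouts are simplified to 1 mana per share, ignoring AMM slippage and fees:

```python
# Toy expected-value check for a binary market priced at 21%.
# A YES share pays 1 mana if the market resolves YES; a NO share pays 1 if it resolves NO.
price_yes = 0.21
price_no = 1 - price_yes

my_p_yes = 0.40    # hypothetical credence of the market creator (made up for the example)
your_p_yes = 0.05  # hypothetical credence of a skeptical trader (also made up)

ev_per_yes_share_for_me = my_p_yes - price_yes          # 0.40 - 0.21 = +0.19
ev_per_no_share_for_you = (1 - your_p_yes) - price_no   # 0.95 - 0.79 = +0.16

print(ev_per_yes_share_for_me, ev_per_no_share_for_you)  # both positive, each by their own lights
```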

What the fuck do you expect me to answer?

I agree this market is slightly confused.

I will avoid arguing about priors for a moment to instead concentrate on what you're imagining as a condition for resolving this market.

If I am correct, your idea is that a priori you have no reason to expect you're not the only conscious experience in existence.

I'd tentatively agree with the proposition that, in some abstract sense, an ideal agent's initial a priori distribution would not put any special weight on possibilities of its causal closure containing other agents with their own reflection of whatever Cartesian boundary that agent has with the reality it's embedded in.

  1. Ideal agent: an abstract concept of something which behaves as an agent but does not generate any dominated strategies or fail fundamental sanity checks.

  2. A priori: Modulo any validity to "a mind is created already in motion" - we can still abstractly represent a condition of the agent's probability distribution sans any updates. I think the ideal prior would be the Solomonoff prior? There's a free variable in which Turing Machine formulation you use, which matters for the probability calculations because you'd want to use program length as a basic complexity penalty (I've put a rough standard formulation just after this list)... But maybe there's a way to calculate the penalty over the space of possible Turing Machine formulations... This is at the edge of my ability to reason with my current mastery, but Eliezer has written on these definitions and concepts extensively, and I'm probably just forgetting the two sentences that make the concept click into place and failing to regenerate that missing memory from other clues... 😮‍💨

  3. Causal closure: I may be misusing this term, but I've decided to post this reply before googling anything as an exercise, so I'll just define how I'm using it. Causal closure would be every distinct event that can become entangled with the agent via "causal arrows" moving in either direction. The way I might be misusing it is that it may refer only to causal arrows which descend from the agent. The way I'm using it is mostly "whatever pieces of reality touch the agent, or that the agent touches - which may not be the same set of things, but which both have potential meaning for the agent."

  4. Cartesian boundary: the abstract division between "environment" and "agent." I'm pretty sure I need to include this part if we're talking about solipsism, since I don't think you can talk about the concept without representing a distinction between "you" and "your environment." Like - you have an experience, but you don't experience the entire chain of cause and effect which picks out that specific experience over other experiences; your mind doesn't entirely generate your own reality, it's doing so with reference to something else which isn't directly accessible to you.
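
(Re: item 2 - for concreteness, here's the standard prefix-machine form of the Solomonoff prior as I remember it; treat it as my paraphrase, not a citation. $U$ is a universal prefix Turing Machine, $|p|$ is a program's length in bits, and $2^{-|p|}$ is exactly the program-length complexity penalty:

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs something beginning with } x} 2^{-|p|}$$

The free variable is the choice of $U$; by the invariance theorem, swapping machines changes these values by at most a constant factor.)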

I think this is the set of concepts you kinda need to approach thinking about solipsism - but we can probably pump some intuitions from weaker footing until you see how everything kinda takes a walk back to the simple truth of "yeah, there are probably other things like you out there beyond your Cartesian boundary with the universe."

Well, I won't cover all of them, and I'm probably skipping over important inferential steps and definitions and formulas... This isn't a MIRI-quality writeup, unfortunately, otherwise I'd probably be working there.

But lemme gesture at the intuitions for a while.

You come into the world as an infant - you know, things were happening in the stuff running your mind before there was a you, then there was a you. (You don't get a literally infinite history, since to find yourself in your experience an infinite number of conditions/computations/causal nodes would've needed to precede you; the set has to be "well-founded" - there's always a 0th thing in the history behind your experience.)

Infant You looks around and ((disjunction here)) Ideal-Infant You locates a steadily narrowing region of possible computable programs within the Solomonoff prior consistent with its experience, and continuously updates a probability distribution whose priors are a complexity penalty over the whole space of possible programs, given some specification of a Turing Machine within which the programs are represented (there's a toy sketch of this just below).

((Or, actually)) Real-Infant You uses evolved brain circuitry to latch onto consistencies in its sensory experience, generating a map with all the squishy pre-syntactic-language, pre-object-permanence, pre-linguistically-updated interpretations that human babies have. My guess: a world basically made out of intensely good or bad collections of sensations, visual fields, textures, etc.
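
To gesture at what the Ideal-Infant branch amounts to computationally, here's a deliberately tiny toy sketch - the hypothesis names and bit-lengths are made up, the "program space" has four entries instead of all computable programs, and hard consistency filtering stands in for real likelihoods (actual Solomonoff induction is uncomputable). It's only meant to show the two moving parts: a 2^(-length) complexity-penalty prior, and an update on each new observation:

```python
from math import isclose

# Toy "program space": each hypothesis maps a time step to a predicted observation,
# tagged with a length in bits (a stand-in for its description complexity).
# These names and lengths are illustrative, not real programs on a universal Turing Machine.
hypotheses = {
    "all_zeros":      {"length_bits": 3, "predict": lambda t: 0},
    "all_ones":       {"length_bits": 3, "predict": lambda t: 1},
    "alternating":    {"length_bits": 5, "predict": lambda t: t % 2},
    "ones_after_two": {"length_bits": 9, "predict": lambda t: 0 if t < 2 else 1},
}

# Complexity-penalty prior: P(h) proportional to 2^(-length), so shorter programs start out favored.
weights = {name: 2.0 ** -h["length_bits"] for name, h in hypotheses.items()}
posterior = {name: w / sum(weights.values()) for name, w in weights.items()}

# "Experience" arrives one observation at a time; any program inconsistent with it is ruled out,
# and the remaining mass is renormalized (a 0/1-likelihood Bayesian update).
observations = [0, 1, 0, 1]
for t, obs in enumerate(observations):
    for name, h in hypotheses.items():
        if h["predict"](t) != obs:
            posterior[name] = 0.0
    remaining = sum(posterior.values())
    posterior = {name: p / remaining for name, p in posterior.items()}

print(posterior)  # all mass ends up on "alternating", the shortest surviving program
assert isclose(sum(posterior.values()), 1.0)
```

All the probability mass lands on the shortest program still consistent with the observation history, which is the cartoon version of "a steadily narrowing region of possible computable programs."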

Then you grow up, and you start filling in all of those blanks between Infant You and Adult You, all the stuff about other people and the world and what you like and don't like and who and what you even are... Math, history, philosophy, science, 🙃 [whoops a ton of bad ideas spilled here accidentally] 🙃, and a bunch of memories and habits and paths through your history of imagination and ideation.

And... At the end of this... You're supposed to think about your "prior on solipsism?" If you accept that reality actually looks like your best guess at what it looks like to the degree that you use that guess to make predictions - then yeah, you're a typical human in a sea of other typical humans.

Like... Would you expect to see anything different on an fMRI if you appeared to be taking one right after an entity who appeared to make a noise identifying itself as me went under that same fMRI machine? If you don't expect to see something dramatically different - like a screen blanked out by invisible radiation pouring into your grey matter from a higher plane - then why do you doubt that others are having internal experiences?

... I suspect this is a case where someone hasn't asked themselves whether their beliefs imply experiencing anything different in reality. Like, the set of beliefs in a world model which actually makes it a useful world model ends up predicting things - and, in our universe, predicting things using simple and reliable relationships between them.

Like, what do you think generates all this talk about experience? You didn't see yourself, inside your own mind, writing this (←↑↓→) specific comment on your post - so what is your guess about the characteristics of the process which generated this comment? Can those characteristics actually work to predict other facts about the world? As in, what sort of words would you read if you clicked/tapped/navigated to other stuff which brings into your visual field a sensory experience identifiable as the "username" feature from your memory of reading this comment?

I probably won't follow up immediately with a bunch of references and clearer arguments. I am kinda just testing my ability to explain stuff like this from off the top of my head here.

I will still probably follow up, because this question actually being the subject of a prediction market (even if it's Manifold... 🙃) has kinda fascinated me.

@NevinWetherill Just to emphasize this:

WHAT the HECK do you expect to resolve this market if your ENTIRE HISTORY of experience up until now is not sufficient evidence to conclude "yeah, there's probably other stuff having internal experiences like mine somewhere out there beyond my Cartesian boundary with reality"????

@NevinWetherill I think my other reply addressed this, but I only skimmed it.

Your prior is already (likely) paradoxical as it refers to your priors relative to those of others. But to be solipsistic is to either:

  1. presume that others do not have priors,

  2. or to assume that the priors of others are not updatable via observation, and therefore always stay the same,

  3. or to assume that both your priors and others' priors are nonobservational and mechanical, in which case no one could observe their own prior, making it odd to make comparative claims about your prior versus others' priors,

  4. or to presume that your priors are different in nature from the priors of others, in which case your priors could not reasonably include a likelihood that yours are "higher" than theirs without incurring a paradox or inconsistency by comparing incomparable elements.

Note that I am assuming that by solipsism you mean that your prior for your own existence is 100%, that your prior for whether or not others share this exact prior is less than 100%, and that by your prior being higher than that of another, you mean that theirs is less even if you cannot put yours to a number.

I would also note that a public claim of solipsism is likely inconsistent, as there is no reason to explore its possibility if the belief is thought to be true. Allowing the responses of others to affect your priors, thoughts, or feelings can only consistently bend you away from solipsism. If, however, you were leaning into solipsism and only committing to the idea that questions of solipsism can be used to influence others despite their lack of observational phenomena, then it might be likewise reasonable to suspect your own motivations for conveniently adopting solipsism, as well as to question your strategy for influencing nonobservational agents, since such manipulations better fit a mechanistic nonobserver agent than a reflective and curious nature.

@alieninmybeverage Solipsism being true for me, in a way where the question resolves to yes, roughly equals "you're all p-zombies and this is some sort of simulation or statistical process governing my experience, but you can otherwise still be described as having functional belief states - or, if not, you are part of a simulation that will make you act as if you do in a coherent way, even after it's discovered that you're a hologram of the simulation."

These possibilities allow for 'mutual verification' even if there's not actually an experience (or functionalist mind) on the other side. Other possibilities, like 'the simulation's holograms suddenly disappear iff it's confirmed', would not.

(Note: simulation isn't the only possibility, and if there's true experiencing observers outside the simulation, I wouldn't consider that to be solipsism being true. Simulation is just the closest word I can find to whatever the unknown ontology of a solipsistic world would be.)

The question makes no sense, because it doesn’t specify for whom you should be the only observer! But I can certainly guess…

bought Ṁ10 NO

I've been feeling very real and not like a zombie lately so if this resolves yes I will be unhappy