If the hard problem of consciousness is solved, what will be true of it?
81%
It will allow us to predict in advance whether a given intelligence architecture will be sentient
73%
Retrospectively, the solution will look relatively trivial, such that it could have been found much earlier with much less effort
63%
It will be solved by AI
63%
It will open a new seemingly unsolvable problem
60%
The methods developed to solve the problem will prove useful for approaching other seemingly unsolvable problems (such as why reality exists rather than not existing)
60%
It will be shown that for any level of intelligence, a non-sentient intelligence can exist
58%
It will be solved within the standard modern scientific framework and the current understanding of the scientific method
51%
It will be shown that every intelligence is sentient to some extent
49%
Understanding its solution will take a median human less time than successfully completing a one-year undergraduate calculus course
44%
The solution will be formally accepted, but the consensus will be that no one understands the “intuition behind it”
40%
It will be shown that the higher the level of intelligence, the harder it is to construct a non-sentient intelligence
40%
The full set of all possible qualia will be derived from it
37%
It will be shown that some sufficiently high level of intelligence requires sentience
37%
It will require some paradigm shift in logic/epistemology/scientific method
22%
It will be formally and unambiguously proven that the problem is unsolvable

I will resolve the market according to my best judgement, but of course there must be scientific consensus that the problem is solved.

New options may be added later.


I don’t have the mana to do it, but I’d greatly appreciate it if someone makes a numerical market on WHEN the hard problem of consciousness will be solved

Do you want a literal poll, or a numerical market?

I meant a numerical market, I mistyped

The question concerns consciousness, but then many of the answers revolve around intelligence and sentience and don't even mention consciousness. It seems like a lot is being assumed there, given that this question wants to get at the true nature of a massive unknown. What's the framework that necessarily relates the three together?

bought Ṁ10 It will require some... YES

What does "in advance" mean? Does it require a purely "first principles" proof based on math alone, or can it just be that we have enough empirical evidence of a pattern that anything which fits is conscious?

@JessicaEvans It means that for a given intelligence, we are in principle able to detect sentience if we have access to its inner workings. Or, say, if we construct an AI, we know whether this architecture will be sentient. There must be some model that works; what exactly it will look like is irrelevant here.

bought Ṁ5 It will be formally ... YES

Which option is meant to capture illusionists, eliminativists, quietists, and other people who deny there is a hard problem in the first place? It seems like, "It will be formally and unambiguously proven that the problem is unsolvable," could, but I don't think most denialists would say that the hard problem formally and unambiguously doesn't exist because it's not clear what terms like "exist" mean. Most or all of the options seem to assume the hard problem is a real and well-defined problem in the first place.

For the record, I'm very confident denialism is correct. The hard problem is a pseudo-problem and will either be reified or follow the path of life (e.g., élan vital), fire (e.g., phlogiston), and so on in being accepted as only existing as a gesture in idea space rather than carving reality at the joints.

@Jacy I observe my sentience and consider it one of the most certain facts I am aware of. It does not make sense for me to use a market to answer the question of whether my belief about sentience is correct, because I am confident in the existence of my sentience much, much more than in the existence of the market (in terms of odds, of course; the probabilities are very similar and are both almost 1). Moreover, a non-existent problem cannot be solved, and this market conditions on the problem being solved. If the problem does not exist, the solving event cannot happen. Existence here means that people observe their sentience (and adjacent things) and wonder how it arises.

@IhorKendiukhov I observe my sentience too; nonetheless I think most of the questions in this market are fundamentally confused.

Sentience, as a generalized concept applied to entities in the broader world (such as other people, animals, etc.), involves taking our own self-models ("I'm having experiences of the world!") and trying to project those self-models onto things outside of ourselves. This makes sense in the context of e.g. interactions with other humans: they model themselves similarly, and so I can say that my model of my experiences and your model of your experiences are meaningfully modeling the same sort of thing, and call that thing 'sentience'.

But this is ultimately a matter of map, not territory, and in fact it seems very unlikely to me that there is an answer in the territory regarding when a given cognitive architecture is sentient or not: the things I perceive are different from the things you perceive, even if there's a useful equivalence-class of 'experiences' to be drawn around them, and the same goes for the things perceived by every other intelligence out there. That equivalence-class's boundaries are essentially arbitrary; even once we know exactly what a given intelligence is perceiving, there's still going to be a floating question of whether those perceptions are relevantly instances of the 'experiences' category, and thus of whether or not that mind is sentient.

Thus, where to draw the boundaries of whether a given mind is sentient isn't a question whose answer we're going to discover outside of ourselves any more than where to draw the boundaries of, for instance, whether a given object is huge: we can measure its size as precisely as we want, but even once we have its size measured there's still a separate question of whether to count that measured size as an instance of the 'huge' category, and we're never going to answer that question empirically, only by definition-setting. (And different people are predictably going to set different definitions.)

For the record, I'm very confident denialism is correct.

@Jacy Does this belief affect your behavior/choices/priorities, or no?

Option suggestions are welcome.

bought Ṁ100 Retrospectively, the... NO

I think many people have hypotheses, and some will look pretty close to whatever we come up with as the final solution when simplified for a layperson, but with justifications that are extremely complex if you want to grasp the underlying mechanisms. So I don't know whether "a median human understanding its solution" is precise, when the solution can be explained at many different levels of abstraction.