Is consciousness an emergent property of language?
17 traders · Ṁ691 · closes 2030 · 27% chance

This question will resolve yes if AGI is achieved using LLMs and there is at least some philosophical consensus that language (with enough relational complexity) is sufficient to give rise to human-level reasoning.

Very open to alternate/more rigorous proposals for resolution.

At the very least, I'm curious as to what people think.


If I could, I would resolve this as NO. What are you even thinking?

My 2 cents is that language is an emergent phenomenon of sufficiently advanced consciousness, but not vice versa. I do not think we had language when the first human-like organisms evolved; rather, we developed language over time to better communicate and socialize. I would argue an ape or a dolphin is conscious as well and may also have developed some form of communication, but a goldfish or a frog is also clearly conscious while lacking any language.

Can you define consciousness in the context of this question?

@NoaNabeshima

I'll work on it... my personal definition of consciousness is an experience of "self-awareness," where the "self" is the Kierkegaardian relational self, meaning a stable entity capable of relating its conception of itself to other concepts.

[In more understandable terms: the "self" is the thing that can keep track of me, Res, going about my daily business without confusing "Res" with a video game avatar (i.e., Tav, when I play Baldur's Gate).]

I realize none of the above translates all that well to a Manifold Market.

However, as a stand-in for "consciousness," what do you think about an LLM managing to achieve "Level 2" general AGI as defined by the Google DeepMind group: https://arxiv.org/pdf/2311.02462.pdf

To me, this would require reasoning, and the ability to project the self (the relational entity) through time.

predicts NO

@ResidentMetaphysician A Level 2 AGI from the paper will probably not have the ability to project the "self" through time (it lives and dies during its response to the prompt). Whatever definition of self you use, it still seems different from consciousness; I would say that GPT-3 had consciousness by that definition. Certainly it has some sentience.

https://youtu.be/sY_5irZNZ0o?si=p1rWcx03uHguOZ93

What do you think about this David Shapiro video? About halfway through, it gets relevant to you.

@VAPOR I'll watch the video; in the meantime... how do you define "consciousness" versus "sentience," and why do you think a Level 2 general AGI won't be able to project a self-concept through time?

predicts NO

@ResidentMetaphysician "a stable entity capable of relating its conception of itself to other concepts." was your definition, gpt3 is this, but only while creating a reply during a prompt, then it dies. It is not conscious in any way at all unless it's during a reply, even then it's debatable what counts as conscious/ness. Consciousness requires that level 2 AGI be always alive, always thinking, instead it dies after every prompt.

I thought I explained it in my previous reply, so sorry if I'm just repeating myself here. Keep watching.
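
To make the "dies after every prompt" point concrete: inference is stateless between calls, so any continuity of "self" has to be re-supplied by whoever is calling the model. Here is a minimal sketch in Python, assuming a hypothetical generate() stub in place of any real LLM API:

```python
# Minimal sketch of statelessness between prompts. `generate` is a
# hypothetical stub standing in for an LLM call, not a real library API:
# it can only see the messages passed into this one invocation.

def generate(messages: list[dict]) -> str:
    """Pretend LLM: its 'world' is exactly the messages in this call."""
    contents = [m["content"] for m in messages]
    return f"I can see {len(contents)} message(s): {contents}"

history = [{"role": "user", "content": "My name is Res."}]
print(generate(history))  # sees 1 message

# A fresh call with no history: nothing from the first call survives.
print(generate([{"role": "user", "content": "What is my name?"}]))

# Continuity across prompts only exists if the caller re-sends the old
# turns alongside the new one.
history.append({"role": "user", "content": "What is my name?"})
print(generate(history))  # sees 2 messages
```

On that picture, whatever "projects a self through time" lives in the surrounding scaffolding (memory, re-sent context), not in the model itself.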

Consciousness seems to be a different thing from language, and different from reasoning, human-level or otherwise.

Interesting question, though: you imply AI reasoning could result from simply training on language. That sounds like Eliezer Yudkowsky's recent question on here, in a way. Go find it on his profile. I've seen it claimed that LLMs already reason, but I don't know enough to agree or disagree.

I think the title invalidates the description. David Shapiro on YouTube has videos on cognitive architecture in AI. You would find it interesting; look up his earlier videos on the subject and you'll get a better definition of consciousness in AI.