Will tweaking current Large Language Models (LLMs) lead us to achieving Artificial General Intelligence (AGI)?
2030 · 18% chance

Current LLMs like GPT have shown remarkable abilities in language processing and generation. However, AGI entails a machine's ability to understand, learn, and apply intelligence across a wide range of tasks at a human-like level.

  • Option 'Yes' will be considered correct if:

    1. An AI based on the current LLM framework demonstrates the ability to perform diverse tasks across multiple domains without task-specific training.

    2. The AI shows understanding and application of common sense, reasoning, and problem-solving skills at a level comparable to an average human adult.

    3. The AI passes a recognized AGI benchmark, such as a general Turing Test, without prior domain-specific training.

  • Option 'No' will be considered correct if:

    1. A new model or technology, not based on the current LLM framework, is required to pass an AGI benchmark test.

    2. The AI community reaches a consensus that AGI cannot be achieved by merely refining existing models and requires a fundamentally different approach or technology.

    3. There is a scientific breakthrough in AI that leads to AGI, one that clearly diverges from the path of current LLMs.

Using other models for multimodal tasks will not affect the result, but the decision-making and problem-solving must go through the LLM component.

Duration will be extended as needed.


What does “based on the current LLM framework” mean? If a new architecture still contains significant self-supervised pretraining on a large corpus of language, but alongside other modalities and with explicit representations of goals, plans, short-term memories, etc., would you count it as “LLM framework”?

@ML Basically, if the core of the solution is still an LLM, it counts. Adding memory, peripheral models for images or audio, running decision-making matrices or evaluators around the output, or running smart prompt flows (e.g., dividing a request into sub-items and rerunning it for each sub-item) all still qualify as LLM, because the solution still revolves around a core LLM. If, say, Google ends up with AGI using an AlphaGo-style model, that will mean LLMs were not the way to go; even if that solution produces its text output through an LLM, it still doesn't count, because the AI core is not an LLM.
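
For concreteness, here is a minimal Python sketch of what an "LLM-core" solution in the above sense could look like: memory, peripheral models, an evaluator, and sub-item prompt flows all wrap around a single LLM call that does the decision-making. All names here (`call_llm`, `describe_image`, `Memory`, `solve`) are hypothetical placeholders, not a real API.

```python
# Hypothetical sketch of an "LLM-core" agent. Every decision routes through
# call_llm; peripherals only translate other modalities into text.
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Simple external memory bolted onto the core model (still counts as LLM)."""
    notes: list[str] = field(default_factory=list)

    def recall(self) -> str:
        return "\n".join(self.notes[-5:])  # last few notes as context

    def store(self, note: str) -> None:
        self.notes.append(note)


def call_llm(prompt: str) -> str:
    """Placeholder for the core LLM call; everything routes through here."""
    raise NotImplementedError


def describe_image(image_bytes: bytes) -> str:
    """Peripheral vision model: converts an image to text for the LLM."""
    raise NotImplementedError


def solve(task: str, image: bytes | None, memory: Memory) -> str:
    # Peripheral models only feed the LLM; they do not make decisions.
    context = memory.recall()
    if image is not None:
        context += "\nImage description: " + describe_image(image)

    # The LLM decomposes the request into sub-items ("smart prompt flow").
    plan = call_llm(f"Context:\n{context}\nTask: {task}\nList the sub-steps.")

    # Each sub-step is solved by the LLM, and an LLM-based evaluator checks
    # the result; decision-making and problem-solving never leave the LLM core.
    answers = []
    for step in plan.splitlines():
        draft = call_llm(f"Context:\n{context}\nSolve this step: {step}")
        verdict = call_llm(f"Is this answer acceptable? Answer yes or no.\n{draft}")
        if verdict.strip().lower().startswith("yes"):
            answers.append(draft)
            memory.store(f"{step} -> {draft}")

    return call_llm("Combine these partial answers:\n" + "\n".join(answers))
```

By contrast, a system whose planning and problem-solving happen in a non-LLM core (e.g., an AlphaGo-style search model) that merely calls an LLM to phrase its final output would not count under this market.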