Did Gemini 1.5 Pro achieve long-context reasoning through retrieval?
Mini
Ṁ44 · Jan 1
50% chance
There is no way an attention network alone is that good. Consider what was demonstrated:
- 1-hour video understanding
- 99% accuracy on Needle in a Haystack (sketched below)
- Learning a language that no one speaks from a grammar book provided "in context"
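For reference, a Needle in a Haystack trial is typically run roughly like the minimal sketch below. Here `model_call` is a hypothetical stand-in for whatever long-context API is under test, not Gemini's actual evaluation harness:

```python
import random

def needle_in_haystack_trial(model_call, context_chars=100_000):
    """One trial: bury a fact at a random depth in filler text,
    then ask the model to retrieve it from the full context."""
    needle = "The secret passphrase is 'blue-giraffe-42'."
    filler = "The quick brown fox jumps over the lazy dog. "
    haystack = (filler * (context_chars // len(filler)))[:context_chars]
    pos = random.randint(0, len(haystack))  # random insertion depth
    prompt = (haystack[:pos] + needle + haystack[pos:]
              + "\n\nWhat is the secret passphrase?")
    return "blue-giraffe-42" in model_call(prompt)
```

Reported scores aggregate such trials across many context lengths and insertion depths.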
Resolves YES if we later find out that the long-context ability was enhanced by agents/retrieval/search/etc., i.e. it was not achieved merely by extending the attention mechanism.
Resolves NA if I can't find out by EOY 2024.
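The distinction the market turns on, sketched very loosely in Python (`llm` and `embed` are hypothetical placeholders, not Gemini internals): does the model attend over the whole input directly, or does a retrieval step narrow the context first?

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pure_long_context(llm, document, question):
    # The full document passes through the attention stack in one shot;
    # this is the "merely extending the attention mechanism" case (NO).
    return llm(document + "\n\n" + question)

def retrieval_augmented(llm, embed, document, question, k=5):
    # A search step selects relevant chunks first, so the model never
    # attends over the entire input; this is the agents/retrieval/search
    # case the market would resolve YES on.
    chunks = [document[i:i + 2000] for i in range(0, len(document), 2000)]
    q = embed(question)
    top = sorted(chunks, key=lambda c: dot(q, embed(c)), reverse=True)[:k]
    return llm("\n".join(top) + "\n\n" + question)
```

From the outside the two pipelines can produce identical answers, which is why resolution depends on finding out how the system is built rather than on benchmark scores.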
Related questions
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll] (70% chance)
Will Gemini outperform GPT-4 at mathematical theorem-proving? (62% chance)
Will Google Gemini perform better (text) than GPT-4? (55% chance)
Will Gemini achieve a higher score on the SAT compared to GPT-4? (70% chance)
Will Gemini Ultra outperform GPT-4V on visual reasoning by the end of 2024? (59% chance)
Is the model Gemini Experimental 1206 an early version of what will be Gemini 2 Pro? (55% chance)
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams? (72% chance)
Will GPT-5 have function-calling ability to some o1-like reasoning model, upon release? (35% chance)
What will be true of Gemini 2?
Does Google Gemini have more than 500B parameters per expert? (12% chance)