Will Kolmogorov-Arnold Networks (KAN) be used to win a prize in a Kaggle competition before 2025?
13% chance

Will a solution based on KANs finish in a prize-winning position on the leaderboard at the end of a Kaggle competition before 2025?


See also https://manifold.markets/CDBiddulph/will-a-sota-model-be-trained-with-k for a longer-term market about state-of-the-art models.

Will a SOTA model be trained with Kolmogorov-Arnold Networks by 2029?
17% chance.

Kolmogorov-Arnold Networks (KANs) are a new kind of neural network, proposed in the paper "KAN: Kolmogorov-Arnold Networks". From the abstract:

"Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ('neurons'), KANs have learnable activation functions on edges ('weights'). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability."

If KANs are truly better than MLPs, we should expect that AI labs will start using them when they try to train SOTA models. If this is confirmed to happen by 2029, this market resolves YES, otherwise NO.

Qualifying models

Uses KANs: A model must use Kolmogorov-Arnold Networks to qualify, or some other kind of neural network that clearly descends from KANs. If the resolution date comes around and we don't know whether or not a SOTA model uses KANs, the market will resolve NO.

State-of-the-art: If the model is close to the state of the art on a commonly used eval, that's good enough to count. For instance, Gemini Ultra supposedly got a state-of-the-art 90.0% score on MMLU, but some argued that they were gaming the eval. Rather than getting into the weeds figuring out whether or not a model counts as SOTA, I'll err on the side of counting it, since the important thing is that a serious attempt was made at achieving SOTA using KANs. If the model focuses on a new domain that doesn't yet have commonly used evals and it is clearly better than anything that came before, that also counts as state-of-the-art.

Past examples of AI models that qualify as "state-of-the-art" would include AlexNet, AlphaZero, GPT-3, DALL-E 2, Sora, Gemini Ultra, Suno, Udio, and Claude 3 Opus.
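To make the abstract's claim concrete ("learnable activation functions on edges... every weight parameter is replaced by a univariate function parametrized as a spline"), here is a minimal forward-pass sketch of a KAN-style layer. This is not the paper's implementation: the class name, shapes, and the use of Gaussian radial basis functions as a simple stand-in for the paper's B-spline basis are all assumptions for illustration.

```python
import numpy as np

class KANLayer:
    """Minimal sketch of a Kolmogorov-Arnold layer (forward pass only).

    Each edge (i, j) carries its own learnable univariate function
        phi_ij(x) = sum_k coef[i, j, k] * B_k(x),
    and the j-th output is y_j = sum_i phi_ij(x_i).
    There are no linear weight matrices at all. NOTE: Gaussian RBFs
    stand in here for the B-spline basis used in the actual paper.
    """

    def __init__(self, in_dim, out_dim, n_basis=8, grid=(-1.0, 1.0), seed=0):
        rng = np.random.default_rng(seed)
        # Basis function centers spread over the input grid.
        self.centers = np.linspace(grid[0], grid[1], n_basis)
        self.width = (grid[1] - grid[0]) / (n_basis - 1)
        # One learnable coefficient per (input, output, basis function):
        # these coefficients parametrize every edge's univariate function.
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, out_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim) -> basis activations (batch, in_dim, n_basis)
        b = np.exp(-((x[:, :, None] - self.centers) / self.width) ** 2)
        # y_j = sum over inputs i of phi_ij(x_i); contract the
        # input and basis axes against the edge coefficients.
        return np.einsum('bik,iok->bo', b, self.coef)

# Illustrative usage: 3 inputs, 2 outputs, batch of 4.
layer = KANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.zeros((4, 3)))
print(y.shape)  # (4, 2)
```

The contrast with an MLP is the point of the market's question: an MLP layer computes `y = act(W @ x)` with fixed `act` and learned linear `W`, while here the learnable parameters live entirely inside the per-edge univariate functions.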