
How will people run DeepSeek Coder v2 236B locally by 2025?
50% Lobotomy levels of quantization (e.g. Q2_K)
50% Unified memory (e.g. M3 Ultra, Mac Studios)
50% Non-GPU main memory (e.g. AMD EPYC with 512GB DDR5)
50% Gaming GPUs in one motherboard (e.g. 4090s)
50% Tall clustering (e.g. Mac Studios over Thunderbolt)
50% Wide clustering (e.g. Petals)
50% HBM FPGA/ASIC dark horse (e.g. AMD rains Versal chips like manna)
DeepSeek Coder v2 is arguably a frontier model from a stellar Chinese team. It has cutting-edge efficiency tricks, improved reasoning thanks to extensive code pretraining, and what looks like an excellent math corpus.
It's also an absolute chonker despite being MoE.
What's going to be the path for GPU poors who need to run LLMs in "ITAR mode" without building their own private datacenter or buying NVIDIA racks?
Resolves to as many options as apply, based on public information. If there is no public information, resolves based on vibes and a DIY pricing spreadsheet.
(Chinese-overlords variant of the Llama 3 405B question, in case Meta gets cold feet.)
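
For intuition on why every option above is really a memory play: MoE keeps per-token compute low (roughly 21B active parameters per token), but all 236B weights still have to be resident somewhere. Here's a minimal sketch of the weight footprint at common GGUF quantization levels; the bits-per-weight figures are my approximations (real quant mixes vary per tensor, and KV cache is extra):

```python
# Back-of-envelope weight-memory estimate for a 236B-parameter model.
# Bits-per-weight values are approximate llama.cpp/GGUF averages, not
# exact: actual files mix quant types per tensor.
TOTAL_PARAMS = 236e9

QUANT_BPW = {
    "FP16":   16.0,
    "Q8_0":    8.5,
    "Q4_K_M":  4.8,
    "Q2_K":    2.6,  # the "lobotomy level" option
}

for name, bpw in QUANT_BPW.items():
    gib = TOTAL_PARAMS * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name:>7}: ~{gib:,.0f} GiB of weights")
```

That works out to roughly 440 GiB at FP16, ~235 GiB at Q8_0, ~130 GiB at Q4_K_M, and ~70 GiB at Q2_K. Q2_K squeezing under 100 GiB is what makes the unified-memory and 512GB-DDR5 options plausible; FP16 stays firmly in NVIDIA-rack territory.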