ik_llama.cpp: llama.cpp fork with better CPU performance
TL;DR
This repository is a fork of llama.cpp with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, improved DeepSeek performance via MLA and FlashMLA, fused MoE operations, tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, and more.
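To give a flavor of how these features combine, below is a minimal launch sketch for hybrid GPU/CPU inference with a DeepSeek-style MoE model. The flags shown (-mla, -fa, -fmoe, -ot) come from the PRs referenced in this README, but the exact values, the model file name, and the tensor-name regex are illustrative assumptions; consult the linked PRs and the built-in help for the authoritative syntax.

```bash
# Hypothetical hybrid GPU/CPU launch (model name and regex are placeholders):
#   -ngl 99   offload all layers to the GPU by default
#   -ot ...   tensor override (PR 232): keep MoE expert tensors in RAM
#   -mla 2    MLA mode for DeepSeek models (see PR 394/409 for GGUF compatibility)
#   -fa       flash attention
#   -fmoe     fused MoE operations
./llama-server -m ./DeepSeek-V3-IQ4_K.gguf \
    -ngl 99 -ot "\.ffn_.*_exps\.=CPU" \
    -mla 2 -fa -fmoe
```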
May 22 2025: Refactored iqk_mul_mat.cpp, significantly reducing compilation time. PR 435
May 17 2025: Option to enable or disable the CPU FA kernels PR 429.
May 12 2025: Users can now control whether, and which, operations on tensors held in RAM are offloaded to the GPU. See PR 405
May 12 2025: Compatibility issues with mainline llama.cpp GGUFs for DeepSeek models with MLA enabled were resolved in PR 394. The prompt processing performance lost when using llama.cpp-style MLA GGUFs was recovered in PR 409.
April 21 2025: ik_llama.cpp builds and runs successfully on Android (using termux), see PR 336
March 7 2025: Custom quantization mixes using regular expressions PR 244 (a usage sketch follows this list)
March 1 2025: Smart Expert Reduction for faster DeepSeek inference PR 239
Feb 25 2025: Tensor overrides for better control over where model weights are stored (GPU or CPU) PR 232
Feb 23 2025: sweep-bench - better performance benchmarking PR 225 (a usage sketch follows the Performance improvements list below)
Feb 19 2025: Q8_KV - new type for 8-bit KV-cache quantization PR 208
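As a sketch of the custom quantization mixes from PR 244: the --custom-q spelling and the tensor-name patterns below are assumptions, so see that PR for the authoritative option name and regex format.

```bash
# Hypothetical custom quantization mix (option syntax assumed from PR 244):
# match tensor names by regex and assign each match its own quant type,
# leaving everything else to the base recipe (here iq4_k).
./llama-quantize --custom-q "attn_k.*=q8_0,ffn_down.*=q6_K" \
    model-f16.gguf model-iq4_k.gguf iq4_k
```

Similarly, the Q8_KV type from PR 208 should be selectable as a KV-cache type via the usual cache-type flags (e.g. -ctk); check the PR for the exact type name and spelling.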
Performance improvements
May 13 2025: Better CPU FA performance for DeepSeek-Lite. PR 410
May 11 2025: Slightly faster flash attention for DeepSeek models on CUDA, along with extended compatibility to Turing or newer GPUs. PR 408
May 4 2025: Significant token generation performance improvement on CUDA with Flash Attention for GQA models. See PR 370 for details and benchmarks.
April 17 2025: Better CPU Flash Attention token generation performance. PR 332
April 3 2025: Much faster MoE implementation on Metal. PR 307
March 25 2025: Better MoE performance on CUDA PR 283
March 23 2025: Better batched processing speed for DeepSeek models PR 282
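The sweep-bench tool from PR 225 above is handy for reproducing these kinds of comparisons: it measures prompt processing and token generation speed at increasing KV-cache depths across the context window. The invocation below is only a sketch; the option set is assumed, and the binary's help output is authoritative.

```bash
# Hypothetical sweep-bench run (options assumed; see PR 225):
# benchmark prompt-processing and token-generation speed in steps
# across a 16k context window, with flash attention enabled.
./llama-sweep-bench -m ./model.gguf -c 16384 -ub 512 -fa
```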
There is no single document describing all new ik_llama.cpp features. The pull requests often contain detailed information, so browsing them is usually the best way to learn about new features and how to use them. In addition:
The Wiki page has performance comparisons to mainline llama.cpp
This guide is a good place to start if you came here because of DeepSeek models
This discussion is about running DeepSeek-V3/R1 on a 16 x 3090 setup
This discussion describes the new quantization types available in ik_llama.cpp
Contributing
Contributions in the form of pull requests, issue submissions (bug reports, feature requests), or general discussions are welcome.