ggml-org / llama.cpp
LLM inference in C/C++
Roadmap / Project status / Manifesto / ggml
Inference of Meta's LLaMA model (and others) in pure C/C++
- llama-server: #12898 | documentation
- llama-mtmd-cli is introduced to replace llava-cli, minicpmv-cli, gemma3-cli (#13012) and qwen2vl-cli (#13141); libllava will be deprecated
- llama-server: #9639

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.
The llama.cpp
project is the main playground for developing new features for the ggml library.
Typically finetunes of the base models below are supported as well.
Instructions for adding support for new models: HOWTO-add-model.md
(to have a project listed here, it should clearly state that it depends on llama.cpp)
| Backend | Target devices |
| --- | --- |
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| MUSA | Moore Threads MTT GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| RPC | All |
The main product of this project is the llama
library. Its C-style interface can be found in include/llama.h.
The project also includes many example programs and tools using the llama
library. The examples range from simple, minimal code snippets to sophisticated sub-projects such as an OpenAI-compatible HTTP server. Possible methods for obtaining the binaries:
- Install llama.cpp via brew, flox or nix

The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:
You can either manually download the GGUF file or directly use any llama.cpp
-compatible models from Hugging Face or other model hosting sites, such as ModelScope, by using this CLI argument: -hf <user>/<model>[:quant]
.
By default, the CLI downloads from Hugging Face; you can switch to another endpoint with the environment variable MODEL_ENDPOINT. For example, to download model checkpoints from ModelScope or other model sharing communities, set MODEL_ENDPOINT=https://www.modelscope.cn/.
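As a minimal sketch, pulling and running a model directly by repo name might look like this (the repo name and quant tag below are illustrative, and the ModelScope line assumes the same repo path exists on that endpoint):

```bash
# Sketch: run a model straight from Hugging Face by repo name (repo/quant are illustrative)
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M

# Sketch: fetch from ModelScope instead of Hugging Face
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M
```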
After downloading a model, use the CLI tools to run it locally - see below.
llama.cpp
requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py
Python scripts in this repo.
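For example, converting a local Hugging Face checkpoint might look roughly like this (a sketch; convert_hf_to_gguf.py is one of the convert_*.py scripts, and the paths and output type are illustrative):

```bash
# Sketch: convert a local Hugging Face checkpoint directory to a GGUF file
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16
```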
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp
:
- Host llama.cpp in the cloud (more info: #9669)

To learn more about model quantization, read this documentation.
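As a sketch, a local quantization step with the bundled llama-quantize tool might look like this (file names and the Q4_K_M type are illustrative):

```bash
# Sketch: quantize an f16 GGUF down to a 4-bit variant (file names and quant type are illustrative)
llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```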
Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME.
```bash
llama-cli -m model.gguf

# > hi, who are you?
# Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
#
# > what is 1+1?
# Easy peasy! The answer to 1+1 is... 2!
```
# use the "chatml" template (use -h to see the list of supported templates)
llama-cli -m model.gguf -cnv --chat-template chatml
# use a custom template
llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
To disable conversation mode explicitly, use -no-cnv:

```bash
llama-cli -m model.gguf -p "I believe the meaning of life is" -n 128 -no-cnv

# I believe the meaning of life is to find your own truth and to live in accordance with it. For me, this means being true to myself and following my passions, even if they don't align with societal expectations. I think that's what I love about yoga – it's not just a physical practice, but a spiritual one too. It's about connecting with yourself, listening to your inner voice, and honoring your own unique journey.
```
```bash
llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

# {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
```
The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.
For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
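As a minimal sketch, a grammar that restricts the model to answering "yes" or "no" could be written and used like this (the file name, prompt, and flag values are illustrative):

```bash
# Sketch: a tiny GBNF grammar that only allows "yes" or "no" as output
cat > yes_no.gbnf << 'EOF'
root ::= ("yes" | "no")
EOF

# Constrain generation with the grammar written above
llama-cli -m model.gguf --grammar-file yes_no.gbnf -p 'Is the sky blue? Answer:' -n 8 -no-cnv
```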
```bash
llama-server -m model.gguf --port 8080

# Basic web UI can be accessed via browser: http://localhost:8080
# Chat completion endpoint: http://localhost:8080/v1/chat/completions
```
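Since the chat completion endpoint follows the OpenAI API shape, a quick smoke test against the server started above might look like this (a sketch; the exact response fields depend on your build):

```bash
# Sketch: send a chat request to the OpenAI-compatible endpoint started above
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hi, who are you?"}]}'
```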
```bash
# up to 4 concurrent requests, each with 4096 max context
llama-server -m model.gguf -c 16384 -np 4
```
```bash
# the draft.gguf model should be a small variant of the target model.gguf
llama-server -m model.gguf -md draft.gguf
```
```bash
# use the /embedding endpoint
llama-server -m model.gguf --embedding --pooling cls -ub 8192
```
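A sketch of calling the /embedding endpoint started above (the request body shape shown is an assumption; adjust it to match the server documentation for your build):

```bash
# Sketch: request an embedding vector for a piece of text from the server started above
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, world"}'
```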
```bash
# use the /reranking endpoint
llama-server -m model.gguf --reranking
```
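A hedged sketch of calling the /reranking endpoint started above; the query/documents payload shape is an assumption, so check the server documentation for your build:

```bash
# Sketch: score a few documents against a query (payload shape is an assumption)
curl http://localhost:8080/reranking \
  -H "Content-Type: application/json" \
  -d '{"query": "What is a panda?", "documents": ["hi", "it is a bear", "The giant panda is a bear species endemic to China."]}'
```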
```bash
# custom grammar
llama-server -m model.gguf --grammar-file grammar.gbnf

# JSON
llama-server -m model.gguf --grammar-file grammars/json.gbnf
```
```bash
llama-perplexity -m model.gguf -f file.txt

# [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
# Final estimate: PPL = 5.4007 +/- 0.67339

# TODO
```
```bash
llama-bench -m model.gguf

# Output:
# | model               | size       | params     | backend    | threads |          test |                  t/s |
# | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
# | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
# | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
#
# build: 3e0ba0e60 (4229)
```
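A sketch of a more targeted benchmark run (the -p, -n, and -t values below are illustrative):

```bash
# Sketch: benchmark 512-token prompt processing and 128-token generation with 8 threads
llama-bench -m model.gguf -p 512 -n 128 -t 8
```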
llama-run runs llama.cpp models. Useful for inferencing. Used with RamaLama.

```bash
llama-run granite-code
```
```bash
llama-simple -m model.gguf

# Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
```
Collaborators can push to branches in the llama.cpp repo and merge PRs into the master branch.

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```
The above example uses an intermediate build b5046 of the library. This can be modified to use a different version by changing the URL and checksum.
Command-line completion is available for some environments.
```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```
Optionally this can be added to your .bashrc
or .bash_profile
to load it
automatically. For example:
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc