# OpenViking
OpenViking is an open-source context database designed specifically for AI Agents (such as OpenClaw). OpenViking unifies the management of the context (memory, resources, and skills) that Agents need through a file-system paradigm, enabling hierarchical context delivery and self-evolution.
English / 中文
Website · GitHub · Issues · Docs
👋 Join our Community
📱 Lark Group · WeChat · Discord · X
In the AI era, data is abundant, but high-quality context is hard to come by. When building AI Agents, developers repeatedly run into the same context-management challenges.
We aim to define a minimalist context interaction paradigm for Agents, allowing developers to completely say goodbye to the hassle of context management. OpenViking abandons the fragmented vector storage model of traditional RAG and innovatively adopts a "file system paradigm" to unify the structured organization of memories, resources, and skills needed by Agents.
With OpenViking, developers can build an Agent's brain just like managing local files, as the sketch below shows.
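A short illustrative session; every command here appears verbatim (or in the same form) in the Quick Start section later in this README:

```bash
ov add-resource https://github.com/volcengine/OpenViking   # ingest a repo as a resource
ov ls viking://resources/                                  # list the Agent's visible context
ov find "what is openviking"                               # semantic search over the tree
```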
Before starting with OpenViking, please ensure your environment meets the following requirements:
```bash
pip install openviking --upgrade --force-reinstall
```

Install the `ov` CLI:

```bash
curl -fsSL https://raw.githubusercontent.com/volcengine/OpenViking/main/crates/ov_cli/install.sh | bash
```

Or build from source:
```bash
cargo install --git https://github.com/volcengine/OpenViking ov_cli
```

OpenViking requires two kinds of model capability: a dense embedding model and a VLM (vision-language model).
OpenViking supports three VLM providers:
| Provider | Description | Get API Key |
|---|---|---|
| `volcengine` | Volcengine Doubao Models | Volcengine Console |
| `openai` | OpenAI Official API | OpenAI Platform |
| `litellm` | Unified access to various third-party models (Anthropic, DeepSeek, Gemini, vLLM, Ollama, etc.) | See LiteLLM Providers |
> 💡 Tip:
>
> - `litellm` supports unified access to various models. The `model` field must follow the LiteLLM format specification.
> - The system auto-detects common models (e.g., `claude-*`, `deepseek-*`, `gemini-*`, `hosted_vllm/*`, `ollama/*`). For other models, use the full prefix according to the LiteLLM format.
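For instance, a model that is not auto-detected needs its full LiteLLM prefix. A minimal sketch using the `openrouter/openai/gpt-4o` format from the model-format table below (the key value is a placeholder):

```json
{
  "vlm": {
    "provider": "litellm",
    "model": "openrouter/openai/gpt-4o", // full prefix required; not auto-detected
    "api_key": "your-openrouter-api-key"
  }
}
```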
Volcengine supports both model names and endpoint IDs. Using model names is recommended for simplicity:
```json
{
  "vlm": {
    "provider": "volcengine",
    "model": "doubao-seed-2-0-pro-260215",
    "api_key": "your-api-key",
    "api_base": "https://ark.cn-beijing.volces.com/api/v3"
  }
}
```

You can also use an endpoint ID (found in the Volcengine ARK Console):
```json
{
  "vlm": {
    "provider": "volcengine",
    "model": "ep-20241220174930-xxxxx",
    "api_key": "your-api-key",
    "api_base": "https://ark.cn-beijing.volces.com/api/v3"
  }
}
```

Use OpenAI's official API:
```json
{
  "vlm": {
    "provider": "openai",
    "model": "gpt-4o",
    "api_key": "your-api-key",
    "api_base": "https://api.openai.com/v1"
  }
}
```

You can also use a custom OpenAI-compatible endpoint:
```json
{
  "vlm": {
    "provider": "openai",
    "model": "gpt-4o",
    "api_key": "your-api-key",
    "api_base": "https://your-custom-endpoint.com/v1"
  }
}
```

LiteLLM provides unified access to various models. The `model` field should follow LiteLLM's naming convention. Here we use Claude and Qwen as examples:
Anthropic:
```json
{
  "vlm": {
    "provider": "litellm",
    "model": "claude-3-5-sonnet-20240620",
    "api_key": "your-anthropic-api-key"
  }
}
```

Qwen (DashScope):
```json
{
  "vlm": {
    "provider": "litellm",
    "model": "dashscope/qwen-turbo", // see https://docs.litellm.ai/docs/providers/dashscope for more details
    "api_key": "your-dashscope-api-key",
    "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
  }
}
```

> 💡 Tip for Qwen:
>
> - For the China (Beijing) region, use `api_base`: `https://dashscope.aliyuncs.com/compatible-mode/v1`
> - For the International region, use `api_base`: `https://dashscope-intl.aliyuncs.com/compatible-mode/v1`
Common model formats:
| Provider | Model Example | Notes |
|---|---|---|
| Anthropic | `claude-3-5-sonnet-20240620` | Auto-detected, uses `ANTHROPIC_API_KEY` |
| DeepSeek | `deepseek-chat` | Auto-detected, uses `DEEPSEEK_API_KEY` |
| Gemini | `gemini-pro` | Auto-detected, uses `GEMINI_API_KEY` |
| Qwen | `dashscope/qwen-turbo` | Set `api_base` based on region (see above) |
| OpenRouter | `openrouter/openai/gpt-4o` | Full prefix required |
| vLLM | `hosted_vllm/llama-3.1-8b` | Set `api_base` to the vLLM server |
| Ollama | `ollama/llama3.1` | Set `api_base` to the Ollama server |
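Conversely, an auto-detected model needs no prefix. A minimal sketch for DeepSeek, reusing only fields from the examples above (per the table, `deepseek-chat` can also pick up `DEEPSEEK_API_KEY` from the environment):

```json
{
  "vlm": {
    "provider": "litellm",
    "model": "deepseek-chat", // auto-detected, no prefix needed
    "api_key": "your-deepseek-api-key"
  }
}
```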
Local Models (vLLM / Ollama):
```bash
# Start Ollama
ollama serve
```

```json
// Ollama
{
  "vlm": {
    "provider": "litellm",
    "model": "ollama/llama3.1",
    "api_base": "http://localhost:11434"
  }
}
```

For complete model support, see the LiteLLM Providers Documentation.
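The vLLM case is analogous. A minimal sketch, assuming a vLLM server on localhost port 8000 (the model string comes from the table above; the endpoint is an assumption, adjust it to your server):

```json
// vLLM
{
  "vlm": {
    "provider": "litellm",
    "model": "hosted_vllm/llama-3.1-8b",
    "api_base": "http://localhost:8000/v1" // assumption: replace with your vLLM server address
  }
}
```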
Create a configuration file at `~/.openviking/ov.conf` (remove the comments before copying):
```json
{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout" // Log output: "stdout" or "file"
  },
  "embedding": {
    "dense": {
      "api_base": "<api-endpoint>", // API endpoint address
      "api_key": "<your-api-key>", // Model service API key
      "provider": "<provider-type>", // Provider type: "volcengine", "openai", or "jina" (currently supported)
      "dimension": 1024, // Vector dimension
      "model": "<model-name>" // Embedding model name (e.g., doubao-embedding-vision-250615 or text-embedding-3-large)
    },
    "max_concurrent": 10 // Max concurrent embedding requests (default: 10)
  },
  "vlm": {
    "api_base": "<api-endpoint>", // API endpoint address
    "api_key": "<your-api-key>", // Model service API key
    "provider": "<provider-type>", // Provider type (volcengine, openai, litellm)
    "model": "<model-name>", // VLM model name (e.g., doubao-seed-2-0-pro-260215 or gpt-4-vision-preview)
    "max_concurrent": 100 // Max concurrent LLM calls for semantic processing (default: 100)
  }
}
```

> Note: For embedding models, the `volcengine` (Doubao), `openai`, and `jina` providers are currently supported. For VLM models, we support three providers: `volcengine`, `openai`, and `litellm`. The `litellm` provider supports various models including Anthropic (Claude), DeepSeek, Gemini, Moonshot, Zhipu, DashScope, MiniMax, vLLM, Ollama, and more.
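As an illustration of the `jina` option mentioned in the note, here is a hypothetical `embedding.dense` fragment; the endpoint, model name, and dimension are assumptions based on Jina's public docs, so verify them for your account:

```json
"embedding": {
  "dense": {
    "api_base": "https://api.jina.ai/v1", // assumption: Jina's OpenAI-compatible endpoint
    "api_key": "your-jina-api-key",
    "provider": "jina",
    "dimension": 1024, // assumption: default dimension of jina-embeddings-v3
    "model": "jina-embeddings-v3" // assumption: substitute the embedding model you use
  },
  "max_concurrent": 10
}
```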
👇 Expand to see the configuration example for your model service:
Volcengine:

```json
{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout" // Log output: "stdout" or "file"
  },
  "embedding": {
    "dense": {
      "api_base": "https://ark.cn-beijing.volces.com/api/v3",
      "api_key": "your-volcengine-api-key",
      "provider": "volcengine",
      "dimension": 1024,
      "model": "doubao-embedding-vision-250615"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://ark.cn-beijing.volces.com/api/v3",
    "api_key": "your-volcengine-api-key",
    "provider": "volcengine",
    "model": "doubao-seed-2-0-pro-260215",
    "max_concurrent": 100
  }
}
```

OpenAI:

```json
{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout" // Log output: "stdout" or "file"
  },
  "embedding": {
    "dense": {
      "api_base": "https://api.openai.com/v1",
      "api_key": "your-openai-api-key",
      "provider": "openai",
      "dimension": 3072,
      "model": "text-embedding-3-large"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://api.openai.com/v1",
    "api_key": "your-openai-api-key",
    "provider": "openai",
    "model": "gpt-4-vision-preview",
    "max_concurrent": 100
  }
}
```

After creating the configuration file, set the environment variable to point to it (Linux/macOS):
```bash
export OPENVIKING_CONFIG_FILE=~/.openviking/ov.conf # by default
```

On Windows, use one of the following:
PowerShell:
```powershell
$env:OPENVIKING_CONFIG_FILE = "$HOME/.openviking/ov.conf"
```

Command Prompt (cmd.exe):
set "OPENVIKING_CONFIG_FILE=%USERPROFILE%\.openviking\ov.conf"💡 Tip: You can also place the configuration file in other locations, just specify the correct path in the environment variable.
👇 Expand to see the configuration example for your CLI/Client:
Example: `ovcli.conf` for connecting to a server on localhost
```json
{
  "url": "http://localhost:1933",
  "timeout": 60.0,
  "output": "table"
}
```

After creating the configuration file, set the environment variable to point to it (Linux/macOS):
```bash
export OPENVIKING_CLI_CONFIG_FILE=~/.openviking/ovcli.conf # by default
```

On Windows, use one of the following:
PowerShell:
```powershell
$env:OPENVIKING_CLI_CONFIG_FILE = "$HOME/.openviking/ovcli.conf"
```

Command Prompt (cmd.exe):
set "OPENVIKING_CLI_CONFIG_FILE=%USERPROFILE%\.openviking\ovcli.conf"📝 Prerequisite: Ensure you have completed the configuration (ov.conf and ovcli.conf) in the previous step.
Now let's run a complete example to experience the core features of OpenViking.
```bash
openviking-server
```

Or run it in the background:

```bash
nohup openviking-server > /data/log/openviking.log 2>&1 &
```

Check the server status, then load and explore a resource:

```bash
ov status
ov add-resource https://github.com/volcengine/OpenViking # --wait
ov ls viking://resources/
ov tree viking://resources/volcengine -L 2
# wait some time for semantic processing if not --wait
ov find "what is openviking"
ov grep "openviking" --uri viking://resources/volcengine/OpenViking/docs/zh
```

Congratulations! You have successfully run OpenViking 🎉
VikingBot is an AI agent framework built on top of OpenViking. Here's how to get started:
```bash
# Option 1: Install VikingBot from PyPI (recommended for most users)
pip install "openviking[bot]"

# Option 2: Install VikingBot from source (for development)
uv pip install -e ".[bot]"

# Start OpenViking server with Bot enabled
openviking-server --with-bot

# In another terminal, start interactive chat
ov chat
```

For production environments, we recommend running OpenViking as a standalone HTTP service to provide persistent, high-performance context support for your AI Agents.
🚀 Deploy OpenViking on Cloud: To ensure optimal storage performance and data security, we recommend deploying on Volcengine Elastic Compute Service (ECS) using the veLinux operating system. We have prepared a detailed step-by-step guide to get you started quickly.
👉 View: Server Deployment & ECS Setup Guide
| Experimental Group | Task Completion Rate | Cost: Input Tokens (Total) |
|---|---|---|
| OpenClaw (memory-core) | 35.65% | 24,611,530 |
| OpenClaw + LanceDB (-memory-core) | 44.55% | 51,574,530 |
| OpenClaw + OpenViking Plugin (-memory-core) | 52.08% | 4,264,396 |
| OpenClaw + OpenViking Plugin (+memory-core) | 51.23% | 2,099,622 |
👉 View: OpenClaw Memory Plugin
👉 View: OpenCode Memory Plugin Example
---
After running the first example, let's dive into the design philosophy of OpenViking. These five core concepts correspond one-to-one with the solutions mentioned earlier, together building a complete context management system:
We no longer view context as flat text slices; instead, everything is unified into an abstract virtual filesystem. Memories, resources, and capabilities alike are mapped to virtual directories under the viking:// protocol, each with a unique URI.
This paradigm gives Agents unprecedented context manipulation capabilities, enabling them to locate, browse, and manipulate information precisely and deterministically through standard commands like ls and find, just like a developer. This transforms context management from vague semantic matching into intuitive, traceable "file operations". Learn more: Viking URI | Context Types
```text
viking://
├── resources/   # Resources: project docs, repos, web pages, etc.
│   ├── my_project/
│   │   ├── docs/
│   │   │   ├── api/
│   │   │   └── tutorials/
│   │   └── src/
│   └── ...
├── user/        # User: personal preferences, habits, etc.
│   └── memories/
│       ├── preferences/
│       │   ├── writing_style
│       │   └── coding_habits
│       └── ...
└── agent/       # Agent: skills, instructions, task memories, etc.
    ├── skills/
    │   ├── search_code
    │   ├── analyze_data
    │   └── ...
    ├── memories/
    └── instructions/
```
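As an illustration, the same `ov` commands from the Quick Start can navigate this tree directly (the URIs below mirror the sketch above and are placeholders):

```bash
ov ls viking://resources/            # list ingested resources
ov tree viking://agent/skills -L 2   # browse an Agent's skills two levels deep
ov find "preferred coding style"     # semantic lookup across the filesystem
```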
Stuffing massive amounts of context into a prompt all at once is not only expensive but also prone to exceeding model windows and introducing noise. OpenViking automatically processes context into three levels upon writing:
Learn more: Context Layers
```text
viking://resources/my_project/
├── .abstract       # L0 Layer: Abstract (~100 tokens) - Quick relevance check
├── .overview       # L1 Layer: Overview (~2k tokens) - Understand structure and key points
├── docs/
│   ├── .abstract   # Each directory has corresponding L0/L1 layers
│   ├── .overview
│   ├── api/
│   │   ├── .abstract
│   │   ├── .overview
│   │   ├── auth.md # L2 Layer: Full content - Load on demand
│   │   └── endpoints.md
│   └── ...
└── src/
    └── ...
```
Single vector retrieval struggles with complex query intents. OpenViking has designed an innovative Directory Recursive Retrieval Strategy that deeply integrates multiple retrieval methods:
This "lock high-score directory first, then refine content exploration" strategy not only finds the semantically best-matching fragments but also understands the full context where the information resides, thereby improving the globality and accuracy of retrieval. Learn more: Retrieval Mechanism
OpenViking organizes everything in a hierarchical virtual filesystem. All context is integrated in a unified format, and each entry corresponds to a unique URI (a viking:// path), replacing the traditional flat, black-box management model with a clear hierarchy that is easy to understand.
The retrieval process adopts a directory recursive strategy. The trajectory of directory browsing and file positioning for each retrieval is fully preserved, allowing users to clearly observe the root cause of problems and guide the optimization of retrieval logic. Learn more: Retrieval Mechanism
OpenViking has a built-in memory self-iteration loop. At the end of each session, developers can actively trigger the memory extraction mechanism. The system will asynchronously analyze task execution results and user feedback, and automatically update them to the User and Agent memory directories.
This allows the Agent to get "smarter with use" through interactions with the world, achieving self-evolution. Learn more: Session Management
For more details, please visit our Full Documentation.
For more details, please see: About Us
OpenViking is still in its early stages, and there are many areas for improvement and exploration. We sincerely invite every developer who is passionate about AI Agent technology to join us.
Let's work together to define and build the future of AI Agent context management. The journey has begun; we look forward to your participation!
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.