krillinai / KrillinAI
A video translation and dubbing tool powered by LLMs, offering professional-grade translations and one-click full-process deployment. It can generate content optimized for platforms such as YouTube, TikTok, YouTube Shorts, Douyin, Xiaohongshu (RedNote), Bilibili, and WeChat Channels.
Krillin AI is an all-in-one solution for effortless video localization and enhancement. This minimalist yet powerful tool handles everything from translation and dubbing to voice cloning and formatting, seamlessly converting videos between landscape and portrait modes for optimal display across all content platforms (YouTube, TikTok, Bilibili, Douyin, WeChat Channels, RedNote, Kuaishou). With its end-to-end workflow, Krillin AI transforms raw footage into polished, platform-ready content in just a few clicks.
🎯 One-Click Start - Launch your workflow instantly. A new desktop version is available and even easier to use!
📥 Video Download - yt-dlp downloads and local file uploads supported
📜 Precise Subtitles - Whisper-powered high-accuracy recognition
🧠 Smart Segmentation - LLM-based subtitle chunking & alignment
🌍 Professional Translation - Paragraph-level translation for consistency
🔄 Term Replacement - One-click domain-specific vocabulary swap
🎙️ Dubbing and Voice Cloning - CosyVoice preset voices or voice cloning
🎬 Video Composition - Auto-formatting for horizontal/vertical layouts
The following picture shows the result after the subtitle file, generated in one click from an imported 46-minute local video, was inserted into the track, with no manual adjustment at all. There are no missing or overlapping subtitles, the sentence segmentation is natural, and the translation quality is also quite high.
Demo videos: subtitle_translation.mp4 | tts.mp4 | agi.mp4
Input languages: Chinese, English, Japanese, German, and Turkish are supported (more languages are being added)
Translation languages: 56 languages are supported, including English, Chinese, Russian, Spanish, French, etc.
First, download the executable from the Releases page that matches your device's system. Follow the instructions below to choose between the desktop and non-desktop versions, then place the software in an empty folder. Running the program will generate some directories, so keeping it in an empty folder makes management easier.
[For the desktop version (release files with "desktop" in the name), refer here]
The desktop version is newly released to address the difficulty beginners face in editing configuration files correctly. It still has some bugs and is being continuously updated.
Double-click the file to start using it.
[For the non-desktop version (release files without "desktop" in the name), refer here]
The non-desktop version is the original release, with more complex configuration but stable functionality. It is also suitable for server deployment, as it provides a web-based UI.
Create a `config` folder in the directory, then create a `config.toml` file inside it. Copy the contents of the `config-example.toml` file from the source code's `config` directory into your `config.toml` and fill in your configuration details; a minimal illustrative sketch follows these steps. (If you want to use OpenAI models but don't know how to get a key, you can join the group for free trial access.)
Double-click the executable or run it in the terminal to start the service.
Open your browser and enter http://127.0.0.1:8888 to begin using it. (Replace 8888 with the port number you specified in the config file.)
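For reference, here is a minimal sketch of what `config.toml` can look like for the quickest (OpenAI-only) setup. Only the dotted key paths mentioned in this README (`app.proxy`, `openai.apikey`, `openai.base_url`) are taken from the text; the section layout and any other names, such as the exact port setting, are assumptions, so always start from `config-example.toml` and keep its real structure.

```toml
# Illustrative sketch only - copy config-example.toml and keep its actual layout.
# Key paths below follow the dotted names mentioned in this README; anything else
# (e.g. the exact name of the port setting) is an assumption.

[app]
proxy = ""            # app.proxy - set this if your network needs a proxy, else leave empty
# The service listens on a configurable port; 8888 matches the URL used above.

[openai]
apikey = "sk-..."     # openai.apikey - your OpenAI API key
base_url = ""         # openai.base_url - leave empty to use the official endpoint
```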
[For the desktop version, i.e., release files with "desktop" in the name, refer here]
The current packaging method for the desktop version cannot support direct double-click execution or DMG installation due to signing issues. Manual trust configuration is required as follows:
Open the directory containing the executable file (assuming the filename is KrillinAI_1.0.0_desktop_macOS_arm64) in Terminal
Execute the following commands sequentially:
sudo xattr -cr ./KrillinAI_1.0.0_desktop_macOS_arm64
sudo chmod +x ./KrillinAI_1.0.0_desktop_macOS_arm64
./KrillinAI_1.0.0_desktop_macOS_arm64
[For the non-desktop version, i.e., release files without "desktop" in the name, refer here]
This software is not signed, so after completing the file configuration in the "Basic Steps," you will need to manually trust the application on macOS. Follow these steps:
Open the directory containing the executable file (assuming the filename is KrillinAI_1.0.0_macOS_arm64) in Terminal.
Execute the following commands sequentially:
sudo xattr -rd com.apple.quarantine ./KrillinAI_1.0.0_macOS_arm64
sudo chmod +x ./KrillinAI_1.0.0_macOS_arm64
./KrillinAI_1.0.0_macOS_arm64
This will start the service.
This project supports Docker deployment. Please refer to the Docker Deployment Instructions.
If you encounter video download failures, please refer to the Cookie Configuration Instructions to configure your cookie information.
The quickest and most convenient configuration method: select `openai` for both `transcription_provider` and `llm_provider`. This way, among the three major configuration categories (`openai`, `local_model`, and `aliyun`), you only need to fill in `openai.apikey` before you can translate subtitles. (Fill in `app.proxy`, `model`, and `openai.base_url` according to your own situation.) A minimal sketch of this setup appears after this list.

The configuration method using the local speech recognition model (not yet supported on macOS), a choice that balances cost, speed, and quality: select `fasterwhisper` for `transcription_provider` and `openai` for `llm_provider`. This way, you only need to fill in `openai.apikey` and `local_model.faster_whisper` in the `openai` and `local_model` categories before you can translate subtitles; the local model will be downloaded automatically. (The notes above about `app.proxy` and `openai.base_url` apply here as well.)

The following usage situations require Alibaba Cloud configuration:
- If `llm_provider` is set to `aliyun`, the Alibaba Cloud large model service will be used, so the `aliyun.bailian` item needs to be configured.
- If `transcription_provider` is set to `aliyun`, or if the "voice dubbing" function is enabled when starting a task, the Alibaba Cloud voice service will be used, so the `aliyun.speech` item needs to be configured.
- The `aliyun.oss` item also needs to be filled in; see the Alibaba Cloud Configuration Instructions below for when this applies.
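To make the provider choices above concrete, here is a hedged `config.toml` sketch covering the first two scenarios. Only the dotted key paths named in this README (`transcription_provider`, `llm_provider`, `openai.apikey`, `openai.base_url`, `app.proxy`, `local_model.faster_whisper`) come from the text; where these keys live in the file and the example values are assumptions, so match everything against `config-example.toml`.

```toml
# Scenario 1: OpenAI for both speech recognition and translation (quickest setup).
transcription_provider = "openai"
llm_provider = "openai"

# Scenario 2: local faster-whisper transcription with OpenAI translation
# (not yet supported on macOS). Uncomment these instead of the two lines above:
# transcription_provider = "fasterwhisper"
# llm_provider = "openai"

[app]
proxy = ""                  # fill in according to your network situation

[openai]
apikey = "sk-..."           # required in both scenarios
model = "gpt-4o"            # example model name - set according to your situation
base_url = ""               # leave empty to use the official endpoint

[local_model]
faster_whisper = "large-v2" # scenario 2 only; example model size, downloaded automatically
```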
Configuration Guide: Alibaba Cloud Configuration Instructions
Please refer to the Frequently Asked Questions.
Do not submit useless files such as `.vscode`, `.idea`, etc.; please make good use of `.gitignore` to filter them.
Do not submit `config.toml`; instead, submit `config-example.toml`.