💬 DeepChat - A smart assistant that connects powerful AI to your personal world
DeepChat is a powerful open-source AI chat platform providing a unified interface for interacting with various large language models. Whether you're using cloud APIs like OpenAI, Gemini, Anthropic, or locally deployed Ollama models, DeepChat delivers a smooth user experience.
As a cross-platform AI assistant application, DeepChat not only supports basic chat functionality but also offers advanced features such as search enhancement, tool calling, and multimodal interaction, making AI capabilities more accessible and efficient.
💡 Why Choose DeepChat
Compared to other AI tools, DeepChat offers the following unique advantages:
Unified Multi-Model Management: One application supports almost all mainstream LLMs, eliminating the need to switch between multiple apps
Seamless Local Model Integration: Built-in Ollama support allows you to manage and use local models without command-line operations
Advanced Tool Calling: Built-in MCP support enables code execution, web access, and other tools without additional configuration
Powerful Search Enhancement: Support for multiple search engines makes AI responses more accurate and timely; non-standard web search sources can also be added and customized quickly
Privacy-Focused: Local data storage and network proxy support reduce the risk of information leakage
Business-Friendly: Embraces open source under the Apache License 2.0, suitable for both commercial and personal use
🔥 Main Features
🌐 Multiple Cloud LLM Provider Support: DeepSeek, OpenAI, SiliconFlow, Grok, Gemini, Anthropic, and more
🏠 Local Model Deployment Support:
Integrated Ollama with comprehensive management capabilities
Control and manage Ollama model downloads, deployments, and runs without command-line operations
🚀 Rich and Easy-to-Use Chat Capabilities
Complete Markdown rendering with code block rendering based on industry-leading CodeMirror
Multi-window and multi-tab architecture supports parallel sessions in every dimension: use large models the way you use a browser, with a non-blocking experience that delivers excellent efficiency
Supports Artifacts rendering for diverse result presentation, significantly reducing token consumption when combined with MCP
Messages support retry to generate multiple variations; conversations can be forked freely, ensuring there's always a suitable line of thought
Supports rendering images, Mermaid diagrams, and other multi-modal content; supports GPT-4o, Gemini, Grok text-to-image capabilities
Supports highlighting external information sources like search results within the content
🔍 Robust Search Extension Capabilities
Built-in integration with leading search APIs such as BoSearch and Brave Search via MCP mode, letting the model decide intelligently when to search
Supports mainstream search engines such as Google, Bing, Baidu, and Sogou Official Accounts search by simulating user browsing, enabling the LLM to read search results the way a human would
Supports reading any search engine: simply configure a search-assistant model to connect various search sources, whether internal networks, engines without APIs, or vertical-domain search engines, as information sources for the model
🔧 Excellent MCP (Model Context Protocol) Support
Complete support for the three core capabilities of Resources/Prompts/Tools in the MCP protocol
Supports semantic workflows, enabling more complex and intelligent automation by understanding the meaning and context of tasks.
Extremely user-friendly configuration interface
Aesthetically pleasing and clear tool call display
Detailed tool call debugging window with automatic formatting of tool parameters and return data
Built-in Node.js runtime environment; services that rely on npx/node need no extra configuration and work out of the box
Supports inMemory services with built-in utilities like code execution, web information retrieval, and file operations; ready for most common use cases out-of-the-box without secondary installation
Converts visual model capabilities into universally usable functions for any model via the built-in MCP service
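As an illustration of how such MCP tooling is usually wired up, here is the common MCP JSON convention used by many MCP-compatible clients; the server name and directory path are hypothetical, and DeepChat's own configuration UI may use a different layout:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

Because DeepChat bundles a Node.js runtime, an npx-based entry like this can run without a separate Node installation.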
💻 Multi-Platform Support: Windows, macOS, Linux
🎨 Beautiful and User-Friendly Interface: user-oriented design with meticulously themed light and dark modes
🔗 Rich DeepLink Support: start conversations via links for seamless integration with other applications; also supports one-click installation of MCP services for simplicity and speed
🔐 Security-First Design: encryption interfaces and code-obfuscation hooks are reserved for chat and configuration data
🛡️ Privacy Protection: supports hiding content during screen sharing, network proxies, and other privacy measures to reduce the risk of information leakage
💰 Business-Friendly:
Embraces open source under the Apache License 2.0, so enterprises can adopt it without license concerns
Enterprise integration needs only minimal configuration-code changes to enable the reserved encryption and obfuscation security capabilities
Clear code structure: model providers and MCP services are both highly decoupled and can be customized freely at minimal cost
Sound architecture: data interaction and UI behavior are separated, fully leveraging Electron's capabilities; this is not a simple web wrapper, and performance is excellent
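The DeepLink feature listed above lends itself to programmatic integration from other applications. The sketch below builds such a link in Python; note that the deepchat:// scheme, the start path, and the msg parameter are assumptions for illustration only — consult the DeepLink documentation for the actual format:

```python
from urllib.parse import quote

def make_deeplink(message: str) -> str:
    # Build a chat-starting deep link; scheme and parameter name are hypothetical.
    return f"deepchat://start?msg={quote(message)}"

print(make_deeplink("Summarize this page"))
# → deepchat://start?msg=Summarize%20this%20page
```

URL-encoding the message (via quote) keeps spaces and punctuation intact when the link is handed to the OS.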
For more details on how to use these features, see the User Guide.
Windows and Linux builds are packaged via GitHub Actions.
For Mac-related signing and packaging, please refer to the Mac Release Guide.
Install Dependencies
$ pnpm install
$ pnpm run installRuntime
# If you get the error: No module named 'distutils'
$ pip install setuptools
For Windows: to allow non-admin users to create symlinks and hardlinks, enable Developer Mode in Settings or use an administrator account; otherwise pnpm operations will fail.
Start Development
$ pnpm run dev
Build
# For Windows
$ pnpm run build:win
# For macOS
$ pnpm run build:mac
# For Linux
$ pnpm run build:linux
# Package for a specific architecture
$ pnpm run build:win:x64
$ pnpm run build:win:arm64
$ pnpm run build:mac:x64
$ pnpm run build:mac:arm64
$ pnpm run build:linux:x64
$ pnpm run build:linux:arm64
For a more detailed guide on development, project structure, and architecture, please see the Developer Guide.
👥 Community & Contribution
DeepChat is an active open-source community project, and we welcome various forms of contribution: