sunner / ChatALL
- Wednesday, May 17, 2023, 00:00:07
Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, iFlytek Spark, ERNIE and more; discover the best answers.
AI bots based on Large Language Models (LLMs) are amazing. However, their behavior can be random, and different bots excel at different tasks. If you want the best experience, don't try them one by one. ChatALL (Chinese name: 齐叨) sends a prompt to several AI bots concurrently, helping you discover the best results.
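The fan-out idea above can be sketched in a few lines of JavaScript. The bot functions here are hypothetical stand-ins (not ChatALL's actual adapters); the point is only the pattern: one prompt goes out to every bot at once, and all answers come back together.

```javascript
// Minimal sketch, assuming each bot adapter is an async function
// that takes a prompt and resolves to a reply string.
const bots = {
  botA: async (prompt) => `botA's answer to: ${prompt}`, // hypothetical adapter
  botB: async (prompt) => `botB's answer to: ${prompt}`, // hypothetical adapter
};

// Send one prompt to all bots concurrently and collect the replies by name.
async function askAll(prompt) {
  const entries = await Promise.all(
    Object.entries(bots).map(async ([name, ask]) => [name, await ask(prompt)])
  );
  return Object.fromEntries(entries);
}

askAll("What is the best sorting algorithm?").then((answers) => {
  console.log(answers); // one reply per bot, gathered in parallel
});
```

Because the requests run in parallel via `Promise.all`, the wait is roughly the slowest bot's latency rather than the sum of all of them.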
AI Bots | Web Access | API |
---|---|---|
ChatGPT | Yes | Yes |
Bing Chat | Yes | No API |
Baidu ERNIE | No | Yes |
Bard | Yes | No API |
Poe | Coming soon | Coming soon |
MOSS | Yes | No API |
Tongyi Qianwen | Coming soon | Coming soon |
Dedao Learning Assistant | Coming soon | No API |
iFLYTEK SPARK | Yes | Coming soon |
Alpaca | Yes | No API |
Vicuna | Yes | No API |
ChatGLM | Yes | No API |
Claude | Yes | No API |
Gradio for Hugging Face space/self-deployed models | Yes | No API |
HuggingChat | Yes | No API |
And more...
ChatALL is a client, not a proxy. Therefore, you need working accounts and/or API tokens for the bots you want to use, and a reliable network connection to them.
Download from https://github.com/sunner/ChatALL/releases
Windows: download the `*-win.exe` file and run the installer.
macOS with Apple Silicon (M1, M2 CPUs): download the `*-mac-arm64.dmg` file.
macOS with Intel CPUs: download the `*-mac-x64.dmg` file.
Linux: download the `.AppImage` file, make it executable, and enjoy the click-to-run experience.
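The Linux steps can be sketched as follows. `ChatALL.AppImage` is a placeholder name here (created with `touch` so the snippet is self-contained); substitute the actual file downloaded from the releases page.

```shell
touch ChatALL.AppImage            # stand-in for the downloaded release file
chmod +x ChatALL.AppImage         # make it executable
test -x ChatALL.AppImage && echo "ready to run"
# then launch it directly: ./ChatALL.AppImage
```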
The guide may help you. To run from source:

```bash
npm install
npm run electron:serve
```
Build for your current platform:

```bash
npm run electron:build
```
Build for all platforms (`-w` for Windows, `-m` for macOS, `-l` for Linux; `--x64` and `--arm64` for both architectures):

```bash
npm run electron:build -- -wml --x64 --arm64
```