jtsang4 / claude-to-chatgpt
This project converts the API of Anthropic's Claude model to the OpenAI Chat API format.
It currently supports the claude-v1.3 and claude-v1.3-100k models.

You can run this project using Cloudflare Workers or Docker:
By using Cloudflare Workers, you don't need a server to deploy this project.
Copy cloudflare-worker.js to the Cloudflare Worker "Quick Edit" Editor. Cloudflare Workers supports 100k requests a day; if you need to make more calls than that, you can use Docker to deploy as described below.
docker run -p 8000:8000 wtzeng/claude-to-chatgpt:latest

Or with docker-compose:

docker-compose up
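If you go the docker-compose route, a minimal docker-compose.yml along these lines should work; the file below is an illustrative sketch (the service name and restart policy are assumptions), so check the repository for its actual compose file.

```yaml
# Illustrative docker-compose.yml sketch; the repository's own file is authoritative.
version: "3"
services:
  claude-to-chatgpt:
    image: wtzeng/claude-to-chatgpt:latest
    ports:
      - "8000:8000"   # same port mapping as the docker run command above
    restart: unless-stopped
```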
The API will then be available at http://localhost:8000. API endpoint: /v1/chat/completions
When you pass gpt-3.5-turbo or gpt-3.5-turbo-0301 as the model parameter, it will be substituted with claude-v1.3; otherwise, claude-v1.3-100k will be used.
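The substitution amounts to a simple lookup; here is a minimal sketch of the idea in TypeScript (illustrative only, not the project's actual code):

```typescript
// Sketch of the model substitution described above (illustrative, not the project's code).
function mapModel(openaiModel: string): string {
  // gpt-3.5-turbo and gpt-3.5-turbo-0301 map to the standard-context model;
  // any other model name falls back to the 100k-context variant.
  if (openaiModel === "gpt-3.5-turbo" || openaiModel === "gpt-3.5-turbo-0301") {
    return "claude-v1.3";
  }
  return "claude-v1.3-100k";
}
```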
Various GUI clients that support the OpenAI Chat API can be used with this project.

You can also call the API directly, for example with curl:
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $CLAUDE_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}'
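Since the endpoint mirrors the OpenAI Chat API, clients that allow overriding the API base URL can also talk to it. A sketch using the official openai Node package is shown below; the base URL and environment variable are assumptions for a local Docker deployment.

```typescript
// Illustrative: pointing the official openai Node SDK at the converter running locally.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8000/v1",       // the converter's endpoint from above
  apiKey: process.env.CLAUDE_API_KEY ?? "",  // your Claude key is forwarded as the bearer token
});

const completion = await client.chat.completions.create({
  model: "gpt-3.5-turbo", // substituted with claude-v1.3 by the converter
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message.content);
```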
The Claude Completion API has an endpoint /v1/complete
which takes the following JSON request:
{
"prompt": "\n\nHuman: Hello, AI.\n\nAssistant: ",
"model": "claude-v1.3",
"max_tokens_to_sample": 100,
"temperature": 1,
"stream": true
}
And returns JSON containing the generated completion text.
The OpenAI Chat API has a similar /v1/chat/completions
endpoint which takes:
{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "Hello, AI."
}
],
"max_tokens": 100,
"temperature": 1,
"stream": true
}
And returns JSON with a list of choices, each containing the assistant's response message.
This project converts between these two APIs, getting completions from the Claude model and formatting them as OpenAI Chat responses.
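A simplified sketch of that conversion in TypeScript, assuming the prompt format and response shape shown above (the real project also handles streaming, stop reasons, and error cases):

```typescript
// Simplified conversion sketch (illustrative, not the project's actual code).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// OpenAI-style messages -> Claude prompt string ("\n\nHuman: ...\n\nAssistant: ").
function messagesToPrompt(messages: ChatMessage[]): string {
  const turns = messages
    .map((m) => (m.role === "assistant" ? `Assistant: ${m.content}` : `Human: ${m.content}`))
    .join("\n\n");
  return `\n\n${turns}\n\nAssistant: `;
}

// Claude completion text -> minimal OpenAI-style chat completion response.
function completionToChatResponse(completionText: string, model: string) {
  return {
    id: `chatcmpl-${Date.now()}`, // placeholder id
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [
      {
        index: 0,
        message: { role: "assistant", content: completionText.trim() },
        finish_reason: "stop",
      },
    ],
  };
}
```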
This project is licensed under the MIT License - see the LICENSE file for details.