cohere-ai / cohere-toolkit
- Saturday, April 27, 2024, 00:00:04
Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications.
You can deploy the Toolkit with one click to the Microsoft Azure Platform.
Clone the repo and run
make setup
Follow the instructions to configure the model - either AWS Sagemaker, Azure, or Cohere's platform. You can do this by running make setup (see Option 2 below), which will generate a .env file for you, or by manually creating a .env file, copying the contents of the provided .env-template, and replacing the values with the correct ones.
- COHERE_API_KEY: If your application will interface with Cohere's API, you will need to supply an API key. Not required if using AWS Sagemaker or Azure. Sign up at https://dashboard.cohere.com/ to create an API key.
- NEXT_PUBLIC_API_HOSTNAME: The backend URL which the frontend will communicate with. Defaults to http://localhost:8000.
- DATABASE_URL: Your PostgreSQL database connection string for SQLAlchemy; it should follow the format postgresql+psycopg2://USER:PASSWORD@HOST:PORT.
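Putting these together, a minimal .env for Cohere's platform might look like the following sketch (all values are placeholders, not real credentials):

```
COHERE_API_KEY=<your-api-key>
NEXT_PUBLIC_API_HOSTNAME=http://localhost:8000
DATABASE_URL=postgresql+psycopg2://USER:PASSWORD@HOST:PORT
```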
To use the toolkit with AWS Sagemaker you will first need the cohere model (a command version) which powers chat deployed in Sagemaker. Follow Cohere's guide and notebooks to deploy a command model and create an endpoint which can then be used with the toolkit.
Then you will need to set up authorization; see more details here. The default toolkit setup uses the configuration file (after running aws configure sso) with the following environment variables:
- SAGE_MAKER_REGION_NAME: The region you configured for the model.
- SAGE_MAKER_ENDPOINT_NAME: The name of the endpoint which you created in the notebook.
- SAGE_MAKER_PROFILE_NAME: Your AWS profile name.
- PYTHON_INTERPRETER_URL: URL to the python interpreter container. Defaults to http://localhost:8080.
- TAVILY_API_KEY: If you want to enable internet search, you will need to supply a Tavily API key. Not required.

Once your environment variables are set, you're ready to deploy the Toolkit locally! Pull the Docker images from Github's Artifact Registry or build the files from source. See the Makefile for all available commands.
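For reference, the Sagemaker-related section of a .env file might look like this sketch (all values are placeholders):

```
SAGE_MAKER_REGION_NAME=us-east-1
SAGE_MAKER_ENDPOINT_NAME=<your-endpoint-name>
SAGE_MAKER_PROFILE_NAME=<your-profile>
PYTHON_INTERPRETER_URL=http://localhost:8080
TAVILY_API_KEY=<optional-tavily-key>
```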
Requirements:
Ensure your shell is authenticated with GHCR.
Pull the Single Container Image from Github's Artifact Registry
docker pull ghcr.io/cohere-ai/cohere-toolkit:latest
Run the images locally:
docker run --name=cohere-toolkit -itd -e COHERE_API_KEY='Your Cohere API key here' -p 8000:8000 -p 4000:4000 ghcr.io/cohere-ai/cohere-toolkit
Run make first-run to start the CLI, which will generate a .env file for you. This will also run all the DB migrations and start the containers:
make first-run
Run make setup to start the CLI, which will generate a .env file for you:
make setup
Then run:
make migrate
make dev
If you did not change the default port, visit http://localhost:4000/ in your browser to chat with the model.
Components in this repo include:
- src/interfaces/coral_web - A web app built in Next.js. Includes a simple SQL database out of the box to store conversation history in the app.
- src/backend - Contains preconfigured data sources and retrieval code to set up RAG on custom data sources (called "Retrieval Chains"). Users can also configure which model to use, selecting from Cohere's models hosted on Cohere's platform, Azure, or AWS Sagemaker. By default, we have configured a Langchain data retriever to test RAG on Wikipedia and your own uploaded documents.

Looking to deploy the Toolkit to your preferred cloud service provider? See our guides below:
Use for configuring and adding new retrieval chains.
Install your dependencies:
poetry install
Run linters:
poetry run black .
poetry run isort .
The docker-compose file should spin up a local db
container with a PostgreSQL server. The first time you set up this project, and whenever new migrations are added, you will need to run:
make migrate
This will apply all existing database migrations and ensure your DB schema is up to date.
If ever you run into issues with Alembic, such as being out of sync and your DB does not contain any data you'd like to preserve, you can run:
make reset-db
make migrate
make dev
This will delete the existing db
container volumes, restart the containers and reapply all migrations.
Run:
make dev
to spin up the test_db service for you. Afterwards, you can run:
make run-tests
When making changes to any of the database models, such as adding new tables, modifying or removing columns, you will need to create a new Alembic migration. You can use the following Make command:
make migration
Important: If adding a new table, make sure to add the import to the model/__init__.py
file! This will allow Alembic to import the models and generate migrations accordingly.
This should generate a migration in the Docker container and copy it to your local /alembic
folder. Make sure the new migration gets created.
Then you can migrate the changes to the PostgreSQL Docker instance using:
make migrate
Make sure you run the following command before running make dev:
make migrate
To debug any of the backend logic while the Docker containers are running, you can run:
make dev
This will run the Docker containers with reloading enabled, then in a separate shell window, run:
make attach
This will attach an interactive shell to the running backend. Now, whenever your backend code hits a
import pdb; pdb.set_trace()
breakpoint, it will let you debug interactively.
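As a sketch of how this works, the handler below is hypothetical (not an actual Toolkit route); it only shows where such a breakpoint would go. Uncommenting the pdb line pauses execution in the shell attached via make attach:

```python
# Hypothetical handler for illustration; the real routes live in src/backend.
def handle_chat(message: str) -> dict:
    # Uncomment the next line to drop into the debugger inside `make attach`:
    # import pdb; pdb.set_trace()
    reply = {"text": f"echo: {message}"}
    return reply


print(handle_chat("hello"))  # prints {'text': 'echo: hello'}
```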
A model deployment is a running version of one of the Cohere command models. The Toolkit currently supports several model deployments: Cohere's platform, Azure, and AWS Sagemaker (for some deployments you may need to add /v1 to the end of the URL). To add your own deployment, implement a class inheriting from BaseDeployment, similar to the other deployments.

It is possible to run just the backend service and call it in the same manner as the Cohere API. Note that the streaming and non-streaming endpoints are split into 'http://localhost:8000/chat-stream' and 'http://localhost:8000/chat', unlike the Cohere API. For example, to stream:
curl --location 'http://localhost:8000/chat-stream' \
--header 'User-Id: me' \
--header 'Content-Type: application/json' \
--data '{
"message": "Tell me about the aya model"
}
'
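The same streaming call can be made from Python with only the standard library. The event format assumed below (newline-delimited JSON with a text field, optionally prefixed with data: ) is a guess; check it against the actual backend responses:

```python
import json
import urllib.request


def stream_chat(message: str, host: str = "http://localhost:8000"):
    """Yield text chunks from the /chat-stream endpoint (event schema assumed)."""
    req = urllib.request.Request(
        f"{host}/chat-stream",
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"User-Id": "me", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # iterate over response lines
            line = raw.decode("utf-8").strip()
            if line:
                yield parse_event(line)


def parse_event(line: str) -> str:
    """Extract the text field from one streamed event line (assumed format)."""
    if line.startswith("data: "):
        line = line[len("data: "):]
    return json.loads(line).get("text", "")
```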
Currently the core chat interface is the Coral frontend. To add your own interface, follow the steps above for calling the backend as an API in your implementation and add it alongside src/interfaces/coral_web
.
If you have already created a connector, it can be used in the toolkit with ConnectorRetriever
. Add in your configuration and then add the definition in config/tools.py similar to Arxiv
implementation with the category Category.DataLoader
. You can now use the Coral frontend and API with the connector.
To use Coral with web search, simply use the Tavily_Internet_Search
tool by adding your API key to the env file. Alternatively you can use any search provider of your choosing, either with your own implementation or an integration implementation (such as LangChain) by following these steps below.
To use Coral with document upload, simply use the File_Upload_LlamaIndex
or File_Upload_Langchain
(this needs a Cohere API key in the .env file) tool, adding your API key to the .env file. Alternatively you can use any document uploader of your choosing, either with your own implementation or an integration implementation (such as LangChain) by following the steps below.
Toolkit includes some sample tools that you can copy to configure your own data sources:
To create your own tools or add custom data sources, see our guide: tools and retrieval sources overview
Please note that these are experimental features.
Chatting with multihop tool usage through Langchain is enabled by setting experimental feature flag to True in .env
.
USE_EXPERIMENTAL_LANGCHAIN=True
When this flag is set to true, only tools that have a Langchain implementation can be used.
These exist under LANGCHAIN_TOOLS
and require a to_langchain_tool()
function on the tool implementation which returns a langchain compatible tool.
Python interpreter and Tavily Internet search are provided in the toolkit by default once the environment is set up.
Example API call:
curl --location 'http://localhost:8000/langchain-chat' \
--header 'User-Id: me' \
--header 'Content-Type: application/json' \
--data '{
"message": "Tell me about the aya model",
"tools": [{"name": "Python_Interpreter"}, {"name": "Internet Search"}]
}'
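The request body above can also be built programmatically, which guarantees well-formed JSON. This helper is a generic sketch, not part of the Toolkit:

```python
import json


def langchain_chat_payload(message: str, tool_names: list[str]) -> str:
    """Serialize a /langchain-chat request body from a list of tool names."""
    return json.dumps(
        {"message": message, "tools": [{"name": n} for n in tool_names]}
    )


payload = langchain_chat_payload(
    "Tell me about the aya model",
    ["Python_Interpreter", "Internet Search"],
)
print(payload)
```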
Currently, citations are not supported in Langchain multihop.
Contributions are what drive an open source community; any contributions made are greatly appreciated. To get started, check out our documentation.