docker / genai-stack
- Sunday, October 8, 2023, 00:00:05
Langchain + Docker + Neo4j
This GenAI application stack will get you started building your own GenAI application in no time. The demo applications can serve as inspiration or as a starting point.
Create a .env file from the environment template file env.example.
MacOS and Linux users can use any LLM that's available via Ollama. Check the "tags" section under the model page you want to use on https://ollama.ai/library and write the tag as the value of the environment variable LLM= in the .env file.
All platforms can use GPT-3.5-turbo and GPT-4 (bring your own API keys for OpenAIs models).
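Putting these settings together, a minimal .env might look like the sketch below. The LLM variable comes from the instructions above; the OPENAI_API_KEY variable name and the llama2 tag are illustrative assumptions, so check env.example for the exact names your version of the template expects:

```
# Local Ollama model (MacOS/Linux); pick any tag from https://ollama.ai/library
LLM=llama2

# Or use OpenAI models instead (required on Windows) -- variable name assumed:
# LLM=gpt-3.5
# OPENAI_API_KEY=<your key>
```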
MacOS
Install Ollama on MacOS and start it before running docker compose up.
Linux
No need to install Ollama manually; it will run in a container as part of the stack when running with the linux profile: run docker compose --profile linux up.
Windows
Not supported by Ollama, so Windows users need to generate an OpenAI API key and configure the stack to use gpt-3.5 or gpt-4 in the .env file.
Warning
There is a performance issue that impacts python applications in the latest release of Docker Desktop. Until a fix is available, please use version 4.23.0
or earlier.
To start everything
docker compose up
If changes to the build scripts have been made, rebuild:
docker compose up --build
To enter watch mode (auto rebuild on file changes), first start everything, then in a new terminal:
docker compose alpha watch
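Watch mode also needs a watch section in the Compose file telling it which paths to monitor. A minimal sketch is below; the field names follow the current Compose develop.watch specification, the service name bot and path are illustrative, and the alpha-era command above may expect a slightly different layout, so treat this as an assumption to check against the repo's own docker-compose.yml:

```yaml
services:
  bot:
    build: .
    develop:
      watch:
        # Rebuild the image whenever files under this path change
        - action: rebuild
          path: ./bot
```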
Shutdown
If the health check fails or containers don't start up as expected, shut down completely before starting up again.
docker compose down
UI: http://localhost:8501 DB client: http://localhost:7474
(Chat input + RAG mode selector)
(CTA to auto generate support ticket draft)
(UI of the auto generated support ticket draft)
UI: http://localhost:8502 DB client: http://localhost:7474
UI: http://localhost:8503 DB client: http://localhost:7474
This application lets you load a local PDF, split it into text chunks, and embed them into Neo4j so you can ask questions about its contents and have the LLM answer them using vector similarity search.
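The chunking step can be sketched as below. The actual stack uses LangChain's text splitters and an embedding model before writing to Neo4j; chunk_text here is a hypothetical stand-in that only shows the core idea of fixed-size chunking with overlap, which preserves context across chunk boundaries:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    each overlapping the previous chunk by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

if __name__ == "__main__":
    doc = "word " * 500  # stand-in for text extracted from a PDF
    chunks = chunk_text(doc, chunk_size=1000, overlap=200)
    print(len(chunks), len(chunks[0]))  # -> 3 1000
```

Each chunk would then be embedded and stored as a node in Neo4j, where a vector index lets the app retrieve the chunks most similar to a question before handing them to the LLM.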