rashadphz / farfalle
open-source answer engine - run local or cloud models
Open-source AI-powered search engine. Run your own local LLM or use the cloud.
Demo answering questions with llama3 on my M1 MacBook Pro.
Hosted version: farfalle.dev (cloud models only)
To use local models, make sure Ollama is installed and start the server:
ollama serve
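If the model hasn't been downloaded yet, pull it first (a sketch assuming the llama3 tag used in the demo; run this in a separate terminal):

```
ollama pull llama3   # download the model used in the demo
ollama list          # confirm the model is available locally
```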
Clone the repository (SSH URL; an HTTPS alternative is shown just below) and create an empty .env file:
git clone git@github.com:rashadphz/farfalle.git
cd farfalle
touch .env
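If you don't have SSH keys set up with GitHub, cloning over HTTPS works too (same repository, public URL):

```
git clone https://github.com/rashadphz/farfalle.git
```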
Add the following variables to the .env file:
TAVILY_API_KEY=...
# Cloud Models
OPENAI_API_KEY=...
GROQ_API_KEY=...
# Rate Limit
RATE_LIMIT_ENABLED=
REDIS_URL=...
# Logging
LOGFIRE_TOKEN=...
# API URL
NEXT_PUBLIC_API_URL=http://localhost:8000
# Local Models
NEXT_PUBLIC_LOCAL_MODE_ENABLED=true
ENABLE_LOCAL_MODELS=True
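As a rough example, a minimal .env for running local models only might look like the following (a sketch with a placeholder key value; the cloud, rate-limit, and logging variables are simply left out):

```
# search provider key (placeholder value)
TAVILY_API_KEY=your-tavily-api-key
# point the frontend at the local backend
NEXT_PUBLIC_API_URL=http://localhost:8000
# enable local (Ollama) models
NEXT_PUBLIC_LOCAL_MODE_ENABLED=true
ENABLE_LOCAL_MODELS=True
```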
Run the containers (this requires Docker Compose version 2.22.0 or later):
docker-compose -f docker-compose.dev.yaml up -d
Visit http://localhost:3000 to view the app.
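If the page doesn't load, a few standard Compose commands can help confirm the version requirement and inspect the containers (using the same compose file as above):

```
docker-compose --version                            # should report 2.22.0 or later
docker-compose -f docker-compose.dev.yaml ps        # list container status
docker-compose -f docker-compose.dev.yaml logs -f   # tail logs from all services
```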
To deploy: after the backend is deployed to Render, copy the web service URL to your clipboard. It should look something like https://some-service-name.onrender.com.
Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
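For example, with the Vercel CLI the variable could be set like this (a sketch; the value is the placeholder backend URL from the previous step, and the Vercel dashboard works just as well):

```
# adds NEXT_PUBLIC_API_URL to the production environment; paste the backend URL when prompted
vercel env add NEXT_PUBLIC_API_URL production
```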
And you're done! 🥳