a16z-infra / ai-getting-started
A Javascript AI getting started stack for weekend projects, including image/text models, vector stores, auth, and deployment configs
Live Demo (deployed on fly.io)
The simplest way to try out this stack is to run it locally and walk through the code files to understand how each component works. Here are the steps to get started.
Fork the repo to your GitHub account, then run the following commands to clone the repo:
git clone git@github.com:[YOUR_GITHUB_ACCOUNT_NAME]/ai-getting-started.git
cd ai-getting-started
npm install
cp .env.local.example .env.local
Fill out the following secrets in .env.local:
a. Clerk Secrets
Go to https://dashboard.clerk.com/ -> "Add Application" -> Fill in Application name/select how your users should sign in -> Create Application
Now you should see both NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY and CLERK_SECRET_KEY on the screen.
b. OpenAI API key
Visit https://platform.openai.com/account/api-keys to get your OpenAI API key
c. Replicate API key
Visit https://replicate.com/account/api-tokens to get your Replicate API key
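If you want a quick sanity check that these keys made it into .env.local, here is a small hypothetical script (checkEnv.ts is not part of this repo, and the exact variable names are assumptions -- confirm them against .env.local.example):

// checkEnv.ts -- hypothetical helper, not part of this repo
import * as dotenv from "dotenv";

// Next.js loads .env.local automatically; a standalone script has to do it itself.
dotenv.config({ path: ".env.local" });

// Assumed variable names; check .env.local.example for the exact ones.
const required = [
  "NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY",
  "CLERK_SECRET_KEY",
  "OPENAI_API_KEY",
  "REPLICATE_API_TOKEN",
];

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.warn(`Missing secrets: ${missing.join(", ")}`);
} else {
  console.log("All required secrets are set.");
}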
NOTE: By default, this template uses Pinecone as the vector store, but you can switch to Supabase pgvector easily. This means you only need to fill out either the Pinecone API key or the Supabase API key.
d. Pinecone API key
Visit https://app.pinecone.io/ and create an index; its name is the value for the PINECONE_INDEX variable. Set the dimension to 1536 (the size of the OpenAI embeddings). Once the index is created, create an API key: copy the "Environment" value to the PINECONE_ENVIRONMENT variable, and the "Value" to PINECONE_API_KEY.
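For reference, here is a minimal sketch of how these three values are typically consumed, assuming the pre-1.0 @pinecone-database/pinecone client used around the time of this stack (not the repo's actual code; the getPineconeIndex helper name is just for illustration):

import { PineconeClient } from "@pinecone-database/pinecone";

async function getPineconeIndex() {
  const pinecone = new PineconeClient();
  // PINECONE_API_KEY and PINECONE_ENVIRONMENT come from .env.local
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
  // PINECONE_INDEX is the index name you picked above
  return pinecone.Index(process.env.PINECONE_INDEX!);
}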
e. Supabase API key
Create a Supabase project, then go to Project Settings -> API.
SUPABASE_URL is the URL value under "Project URL".
SUPABASE_PRIVATE_KEY is the key that starts with ey under "Project API Keys".
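Similarly, a minimal sketch (assuming @supabase/supabase-js) of how these two values would be used to create a client; the repo's own code may differ:

import { createClient } from "@supabase/supabase-js";

// SUPABASE_URL and SUPABASE_PRIVATE_KEY come straight from .env.local
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!
);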
There are a few markdown files under the /blogs directory as examples so you can do Q&A on them. To generate embeddings and store them in the vector database for future queries, run one of the following commands, depending on which vector store you use.
Run the following command to generate embeddings and store them in Pinecone:
npm run generate-embeddings-pinecone
If you are using Supabase pgvector, replace /api/qa-pinecone with /api/qa-pg-vector in QAModel.tsx. Then run the following command to generate embeddings and store them in Supabase pgvector:
npm run generate-embeddings-supabase
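Conceptually, both generate-embeddings scripts do roughly the following. This is a simplified sketch (not the repo's actual code), assuming the OpenAI v3 Node SDK and the pre-1.0 Pinecone client shown earlier:

import fs from "fs";
import path from "path";
import { Configuration, OpenAIApi } from "openai";
import { PineconeClient } from "@pinecone-database/pinecone";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function generateEmbeddings() {
  const pinecone = new PineconeClient();
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
  const index = pinecone.Index(process.env.PINECONE_INDEX!);

  const blogDir = path.join(process.cwd(), "blogs");
  for (const file of fs.readdirSync(blogDir).filter((f) => f.endsWith(".md"))) {
    const text = fs.readFileSync(path.join(blogDir, file), "utf8");
    // One embedding per file for simplicity; the real scripts chunk each document first.
    const res = await openai.createEmbedding({
      model: "text-embedding-ada-002",
      input: text,
    });
    const values = res.data.data[0].embedding; // 1536-dimensional, matching the index
    await index.upsert({
      upsertRequest: { vectors: [{ id: file, values, metadata: { file } }] },
    });
  }
}

generateEmbeddings();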
Now you are ready to test out the app locally! To do this, simply run npm run dev under the project root.
To deploy on fly.io:
Run fly launch under the project root -- this will generate a fly.toml that includes all the configurations you will need.
Run fly deploy --ha=false to deploy the app -- the --ha flag makes sure fly only spins up one instance, which is included in the free plan. You also want to run fly scale memory 512 to scale up the fly VM memory for this app.
Run cat .env.local | fly secrets import to upload secrets.
If you are also deploying a production environment, create a .env.prod file locally and fill in all the production-environment secrets. Remember to update NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY and CLERK_SECRET_KEY by copying secrets from Clerk's production instance, then run cat .env.prod | fly secrets import to upload those secrets.
How to contribute to this repo
You can fork this repo, make changes, and create a PR. Add @ykhli or @timqian as reviewers.
If you are new to contributing on GitHub, here is a step-by-step guide: click Fork on the top right of this page, push your changes to your fork, and open a pull request against this repo.
Feel free to open feature requests, bug reports, etc. under Issues.