LangChain4j
Java version of LangChain
Welcome!
The goal of LangChain4j is to simplify integrating AI/LLM capabilities into Java applications.
LangChain4j began development in early 2023 amid the ChatGPT hype. We noticed a lack of Java counterparts to the numerous Python and JavaScript LLM libraries and frameworks, and we had to fix that! Although "LangChain" is in our name, the project is a fusion of ideas and concepts from LangChain, Haystack, LlamaIndex, and the broader community, spiced up with a touch of our own innovation.
We actively monitor community developments, aiming to quickly incorporate new techniques and integrations, ensuring you stay up-to-date. The library is under active development. While some features from the Python version of LangChain are still being worked on, the core functionality is in place, allowing you to start building LLM-powered apps now!
For easier integration, LangChain4j also provides a Quarkus extension and Spring Boot starters.
Please see examples of how LangChain4j can be used in the langchain4j-examples repo.
Documentation can be found here.
Tutorials can be found here.
LangChain4j features a modular design, comprising:

- The langchain4j-core module, which defines core abstractions (such as ChatLanguageModel and EmbeddingStore) and their APIs.
- The main langchain4j module, containing useful tools like ChatMemory and OutputParser, as well as high-level features like AiServices.
- The langchain4j-{integration} modules, each providing integration with various LLM providers and embedding stores into LangChain4j.

You can use the langchain4j-{integration} modules independently. For additional features, simply import the main langchain4j dependency.

News:

- 29 September: models now return Response<T> instead of T. Response<T> contains token usage and finish reason.
- 17 July: you can use the gpt-3.5-turbo and text-embedding-ada-002 models with LangChain4j for free, without needing an OpenAI account and keys! Simply use the API key "demo".
- 5 July: prompt templates now support {{current_date}}, {{current_time}} and {{current_date_time}} placeholders (see the sketch after this list).
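To illustrate the placeholders, here is a minimal, hedged sketch assuming langchain4j's PromptTemplate and Prompt classes; the template text and the name variable are made up for the example:

PromptTemplate template = PromptTemplate.from(
        "Today is {{current_date}}. Write a short greeting for {{name}}.");

// {{current_date}} is expected to be filled in automatically;
// only {{name}} has to be provided explicitly.
Prompt prompt = template.apply(Map.of("name", "John"));

System.out.println(prompt.text());
// e.g. "Today is 2024-03-08. Write a short greeting for John."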
You can define declarative "AI Services" that are powered by LLMs:
interface Assistant {

    String chat(String userMessage);
}
Assistant assistant = AiServices.create(Assistant.class, model);
String answer = assistant.chat("Hello");
System.out.println(answer);
// Hello! How can I assist you today?
You can use an LLM as a classifier:
enum Sentiment {
    POSITIVE, NEUTRAL, NEGATIVE
}

interface SentimentAnalyzer {

    @UserMessage("Analyze sentiment of {{it}}")
    Sentiment analyzeSentimentOf(String text);

    @UserMessage("Does {{it}} have a positive sentiment?")
    boolean isPositive(String text);
}
SentimentAnalyzer sentimentAnalyzer = AiServices.create(SentimentAnalyzer.class, model);
Sentiment sentiment = sentimentAnalyzer.analyzeSentimentOf("It is good!");
// POSITIVE
boolean positive = sentimentAnalyzer.isPositive("It is bad!");
// false
You can easily extract structured information from unstructured data:
class Person {

    private String firstName;
    private String lastName;
    private LocalDate birthDate;
}

interface PersonExtractor {

    @UserMessage("Extract information about a person from {{text}}")
    Person extractPersonFrom(@V("text") String text);
}
PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
String text = "In 1968, amidst the fading echoes of Independence Day, "
+ "a child named John arrived under the calm evening sky. "
+ "This newborn, bearing the surname Doe, marked the start of a new journey.";
Person person = extractor.extractPersonFrom(text);
// Person { firstName = "John", lastName = "Doe", birthDate = 1968-07-04 }
You can provide tools that LLMs can use! A tool can be anything: retrieving information from a DB, calling an API, etc. See an example here.
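To illustrate the idea, here is a minimal, hedged sketch reusing the Assistant interface from above; the Calculator class, its method, and the question are made up for the example, and the wiring assumes the @Tool annotation and the AiServices builder:

class Calculator {

    // The description helps the LLM decide when to call this tool
    @Tool("Calculates the square root of a number")
    double squareRoot(double x) {
        return Math.sqrt(x);
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
        .chatLanguageModel(model)
        .tools(new Calculator())
        .build();

String answer = assistant.chat("What is the square root of 475695037565?");
// The model calls squareRoot(475695037565) and uses the result in its answer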
Add the LangChain4j OpenAI dependency to your project.

Maven:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>0.27.1</version>
</dependency>

Gradle:

implementation 'dev.langchain4j:langchain4j-open-ai:0.27.1'
Import your OpenAI API key:
String apiKey = System.getenv("OPENAI_API_KEY");
You can also use the API key "demo" to test OpenAI, which we provide for free.
Create an instance of a model and start interacting:
OpenAiChatModel model = OpenAiChatModel.withApiKey(apiKey);
String answer = model.generate("Hello world!");
System.out.println(answer); // Hello! How can I assist you today?
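If you need more control than withApiKey provides, the model can also be configured through its builder. A hedged sketch (the parameter values below are illustrative):

OpenAiChatModel model = OpenAiChatModel.builder()
        .apiKey(apiKey)                  // or "demo" to try it out for free
        .modelName("gpt-3.5-turbo")
        .temperature(0.3)
        .timeout(Duration.ofSeconds(60)) // java.time.Duration
        .build();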
Provider | Native Image | Completion | Streaming | Async Completion | Async Streaming | Embedding | Image Generation | ReRanking |
---|---|---|---|---|---|---|---|---|
OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ||
Azure OpenAI | ✅ | ✅ | ✅ | ✅ | ||||
Hugging Face | ✅ | ✅ | ✅ | |||||
Amazon Bedrock | ✅ | ✅ | ||||||
Google Vertex AI Gemini | ✅ | ✅ | ✅ | ✅ | ||||
Google Vertex AI | ✅ | ✅ | ✅ | ✅ | ✅ | |||
Mistral AI | ✅ | ✅ | ✅ | ✅ | ✅ | |||
DashScope | ✅ | ✅ | ✅ | ✅ | ||||
LocalAI | ✅ | ✅ | ✅ | ✅ | ||||
Ollama | ✅ | ✅ | ✅ | ✅ | ✅ | |||
Cohere | ✅ | |||||||
Qianfan | ✅ | ✅ | ✅ | ✅ | ✅ | |||
ChatGLM | ✅ | |||||||
Nomic | ✅ |
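Because every provider lives in its own langchain4j-{integration} module, switching providers mostly means switching the model implementation. A hedged sketch, assuming the langchain4j-ollama module and an Ollama server running locally with a llama2 model pulled:

ChatLanguageModel model = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434") // default local Ollama endpoint
        .modelName("llama2")
        .build();

String answer = model.generate("Hello world!");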
Please note that the library is in active development.
Please let us know what features you need!
Please help us make this open-source library better by contributing.
You might ask: why would I need all of this?
We highly recommend watching this amazing 90-minute tutorial on prompt engineering best practices, presented by Andrew Ng (DeepLearning.AI) and Isa Fulford (OpenAI). This course will teach you how to use LLMs efficiently and achieve the best possible results. Good investment of your time!
Here is some practical advice for using LLMs:
You will need an API key from OpenAI (paid) or HuggingFace (free) to use LLMs hosted by them.
We recommend using OpenAI LLMs (gpt-3.5-turbo and gpt-4) as they are by far the most capable and are reasonably priced.
It will cost approximately $0.01 to generate 10 pages (A4 format) of text with gpt-3.5-turbo. With gpt-4, the cost will be $0.30 to generate the same amount of text. However, for some use cases, this higher cost may be justified.
For embeddings, we recommend using one of the models from the HuggingFace MTEB leaderboard. You'll have to find the best one for your specific use case.
Here's how to get a HuggingFace API key: create a free HuggingFace account and generate an access token in your account settings.