cloudwego / eino
The ultimate LLM/AI application development framework in Golang.
English | 中文
Eino['aino] (pronounced similarly to "I know") aims to be the ultimate LLM application development framework in Golang. Drawing inspiration from excellent open-source LLM application development frameworks such as LangChain and LlamaIndex, as well as from cutting-edge research and real-world applications, Eino offers an LLM application development framework that emphasizes simplicity, scalability, reliability, and effectiveness, and that aligns with Golang programming conventions.
What Eino provides are:
With the above arsenal, Eino can standardize, simplify, and improve efficiency at different stages of the AI application development cycle:
Use a component directly:
```go
model, _ := openai.NewChatModel(ctx, config) // create an invokable LLM instance

message, _ := model.Generate(ctx, []*Message{
    SystemMessage("you are a helpful assistant."),
    UserMessage("what does the future AI App look like?"),
})
```
Of course you can do that; Eino provides plenty of useful components to use out of the box. But you can do even more by using orchestration.
Eino provides two sets of APIs for orchestration:
| API | Characteristics and usage |
| --- | --- |
| Chain | Simple chained directed graph that can only go forward. |
| Graph | Cyclic or acyclic directed graph. Powerful and flexible. |
Let's create a simple chain: a ChatTemplate followed by a ChatModel.
```go
chain, _ := NewChain[map[string]any, *Message]().
    AppendChatTemplate(prompt).
    AppendChatModel(model).
    Compile(ctx)

chain.Invoke(ctx, map[string]any{"query": "what's your name?"})
```
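The `prompt` appended above is a ChatTemplate. Here is a hedged sketch of how it could be built, assuming the `prompt.FromMessages` constructor and `FString`-style placeholders from Eino's prompt component (named `chatTpl` here; check the components/prompt package for the exact API):

```go
// a ChatTemplate with a system prompt and a {query} placeholder
// that gets filled from the map passed to Invoke
chatTpl := prompt.FromMessages(schema.FString,
    schema.SystemMessage("you are a helpful assistant."),
    schema.UserMessage("{query}"),
)
```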
Now let's create a graph that uses a ChatModel to generate an answer or tool calls, then uses a ToolsNode to execute those tools if needed.
```go
graph := NewGraph[map[string]any, *schema.Message]()

_ = graph.AddChatTemplateNode("node_template", chatTpl)
_ = graph.AddChatModelNode("node_model", chatModel)
_ = graph.AddToolsNode("node_tools", toolsNode)
_ = graph.AddLambdaNode("node_converter", takeOne)

_ = graph.AddEdge(START, "node_template")
_ = graph.AddEdge("node_template", "node_model")
_ = graph.AddBranch("node_model", branch)
_ = graph.AddEdge("node_tools", "node_converter")
_ = graph.AddEdge("node_converter", END)

compiledGraph, err := graph.Compile(ctx)
if err != nil {
    return err
}

out, err := compiledGraph.Invoke(ctx, map[string]any{"query": "Beijing's weather this weekend"})
```
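The `branch` and `takeOne` values above are not shown. A hedged sketch of what they might look like, assuming `NewGraphBranch` and `InvokableLambda` from the compose package (treat the exact signatures as illustrative):

```go
// branch: route to the tools node when the model emitted tool calls,
// otherwise end the graph with the model's answer
branch := NewGraphBranch(func(ctx context.Context, msg *schema.Message) (string, error) {
    if len(msg.ToolCalls) > 0 {
        return "node_tools", nil
    }
    return END, nil
}, map[string]bool{"node_tools": true, END: true})

// takeOne: a lambda that keeps only the first message produced by the tools node
takeOne := InvokableLambda(func(ctx context.Context, msgs []*schema.Message) (*schema.Message, error) {
    if len(msgs) == 0 {
        return nil, errors.New("no tool output")
    }
    return msgs[0], nil
})
```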
Now let's create a 'ReAct' agent: a ChatModel bound to a set of Tools. It receives the input Messages and decides on its own whether to call a tool or to output the final result. A tool's execution result becomes a new input Message for the ChatModel and serves as context for the next round of reasoning.
We provide a complete ReAct Agent implementation out of the box in the flow package. Check out the code here: flow/agent/react
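For orientation, a hedged sketch of using that packaged agent; `react.NewAgent` and the config field names below are recalled from the flow/agent/react package and may differ between versions, so verify them against the package before use:

```go
// build the agent from a tool-calling chat model and the tools it may invoke
agent, _ := react.NewAgent(ctx, &react.AgentConfig{
    Model:       chatModel,                             // a ChatModel that supports tool calling
    ToolsConfig: compose.ToolsNodeConfig{Tools: tools}, // the tools exposed to the model
})

// the agent loops model -> tools -> model until a final answer is produced
out, _ := agent.Generate(ctx, []*schema.Message{
    schema.UserMessage("Beijing's weather this weekend"),
})
fmt.Println(out.Content)
```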
Our ReAct Agent implementation uses Eino's graph orchestration exclusively, which provides several benefits out of the box.
For example, you could easily extend the compiled graph with callbacks:
```go
handler := NewHandlerBuilder().
    OnStartFn(func(ctx context.Context, info *RunInfo, input CallbackInput) context.Context {
        log.Infof("onStart, runInfo: %v, input: %v", info, input)
        return ctx
    }).
    OnEndFn(func(ctx context.Context, info *RunInfo, output CallbackOutput) context.Context {
        log.Infof("onEnd, runInfo: %v, output: %v", info, output)
        return ctx
    }).
    Build()

compiledGraph.Invoke(ctx, input, WithCallbacks(handler))
```
or you could easily assign options to different nodes:
```go
// assign options to all nodes
compiledGraph.Invoke(ctx, input, WithCallbacks(handler))

// assign options only to ChatModel nodes
compiledGraph.Invoke(ctx, input, WithChatModelOption(WithTemperature(0.5)))

// assign options only to node_1
compiledGraph.Invoke(ctx, input, WithCallbacks(handler).DesignateNode("node_1"))
```
Eino encapsulates common building blocks into component abstractions, each with multiple implementations that are ready to use out of the box.
Implementations can be nested to capture complex business logic.
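Because each abstraction is an ordinary Go interface, code written against it works with any implementation. A minimal sketch, assuming the `model.BaseChatModel` abstraction (Generate/Stream) from the components/model package:

```go
// summarize depends only on the component abstraction, so any ChatModel
// implementation (OpenAI, Ark, Ollama, ...) can be passed in unchanged
func summarize(ctx context.Context, cm model.BaseChatModel, text string) (string, error) {
    msg, err := cm.Generate(ctx, []*schema.Message{
        schema.SystemMessage("Summarize the user's text in one sentence."),
        schema.UserMessage(text),
    })
    if err != nil {
        return "", err
    }
    return msg.Content, nil
}
```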
| Streaming Paradigm | Explanation |
| --- | --- |
| Invoke | Accepts non-stream type I and returns non-stream type O |
| Stream | Accepts non-stream type I and returns stream type StreamReader[O] |
| Collect | Accepts stream type StreamReader[I] and returns non-stream type O |
| Transform | Accepts stream type StreamReader[I] and returns stream type StreamReader[O] |
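A compiled Chain or Graph exposes all four paradigms as methods on the same compiled object. A hedged sketch against the graph compiled earlier; the `schema.StreamReaderFromArray` helper used to fabricate an input stream is an assumption, so substitute whatever stream source your Eino version provides:

```go
in := map[string]any{"query": "Beijing's weather this weekend"}

// Invoke: non-stream in, non-stream out
msg, _ := compiledGraph.Invoke(ctx, in)

// Stream: non-stream in, stream out (consume with Recv/Close as shown earlier)
sr, _ := compiledGraph.Stream(ctx, in)
sr.Close()

// Collect: stream in, non-stream out
msg, _ = compiledGraph.Collect(ctx, schema.StreamReaderFromArray([]map[string]any{in}))

// Transform: stream in, stream out
sr, _ = compiledGraph.Transform(ctx, schema.StreamReaderFromArray([]map[string]any{in}))
sr.Close()

fmt.Println(msg.Content)
```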
The Eino framework consists of several parts:
Eino(this repo): Contains Eino's type definitions, streaming mechanism, component abstractions, orchestration capabilities, aspect mechanisms, etc.
EinoExt: component implementations, callback handler implementations, component usage examples, and various tools such as evaluators and prompt optimizers.
Eino Devops: visual development, visual debugging, etc.
EinoExamples is the repo containing example applications and best practices for Eino.
To learn and use Eino, we provide a comprehensive Eino User Manual to help you quickly understand its concepts and master the skills of developing AI applications with it. Start exploring the Eino User Manual now!
For a quick introduction to building AI applications with Eino, we recommend starting with Eino: Quick Start
If you discover a potential security issue in this project, or think you may have discovered a security issue, we ask that you notify Bytedance Security via our security center or vulnerability reporting email.
Please do not create a public GitHub issue.
This project is licensed under the Apache-2.0 License.