Doriandarko / maestro
- Saturday, April 27, 2024, 00:00:07
A framework for Claude Opus to intelligently orchestrate subagents.
This Python script demonstrates an AI-assisted task breakdown and execution workflow using the Anthropic API. It utilizes two AI models, Opus and Haiku, to break down an objective into sub-tasks, execute each sub-task, and refine the results into a cohesive final output.
Maestro now runs locally thanks to the Ollama platform. Experience the power of Llama 3 locally!
Before running the script

Install the Ollama client from https://ollama.com/download, then install the Python package:

```shell
pip install ollama
```

and pull the model:

```python
ollama.pull('llama3:70b')
```
Which model you pull depends on the one you want to use; you only need to do this once, or again when you want to update the model after a new version is out. In the script I am using both versions, but you can customize which models to use:

```python
ollama.pull('llama3:70b')
ollama.pull('llama3:8b')
```
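Since the script uses both model sizes, one way to keep the choice customizable is to hold the model tags in one place. A minimal sketch (the variable and function names here are illustrative assumptions, not the script's actual code):

```python
# Illustrative sketch: which local Ollama model plays each role.
# These names are assumptions, not variables from maestro-ollama.py.
ORCHESTRATOR_MODEL = "llama3:70b"  # breaks the objective into sub-tasks
SUB_AGENT_MODEL = "llama3:8b"      # executes each individual sub-task

def model_for_role(role: str) -> str:
    """Return the Ollama model tag for a given role."""
    models = {"orchestrator": ORCHESTRATOR_MODEL, "sub_agent": SUB_AGENT_MODEL}
    return models[role]
```

Swapping in a different model is then a one-line change per role.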
Then run:

```shell
python maestro-ollama.py
```
To use Groq instead, install the client:

```shell
pip install groq
```

Then run:

```shell
python maestro-groq.py
```
Now, when it's creating a task for its subagent, Claude Opus will perform a search and get the best answer to help the subagent solve that task even better.
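One way to picture this step is folding the search results into the sub-agent's prompt before handing the task off. A minimal sketch with a stubbed search function standing in for the real Tavily call (the function and parameter names are illustrative, not from the script):

```python
def enrich_prompt_with_search(sub_task: str, search_fn) -> str:
    """Prepend search context to a sub-task prompt.

    `search_fn` is a stand-in for a real search call (e.g. Tavily);
    these names are illustrative assumptions.
    """
    context = search_fn(sub_task)
    return f"Relevant search context:\n{context}\n\nSub-task:\n{sub_task}"

# Usage with a stubbed search function (no network access needed):
stub_search = lambda query: f"(top result for '{query}')"
prompt = enrich_prompt_with_search("Summarize Llama 3 benchmarks", stub_search)
```

The sub-agent then sees both the task and the retrieved context in a single prompt.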
Make sure you add your Tavily API key for search to work.
Get one here https://tavily.com/
Added support for GPT-4 as an orchestrator in maestro-gpt.py. After you complete your installs, simply run:

```shell
python maestro-gpt.py
```
To run this script, you need the following packages:

- anthropic
- rich

```shell
pip install -r requirements.txt
```
Replace the placeholder with your Anthropic API key:

```python
client = Anthropic(api_key="YOUR_API_KEY_HERE")
```
If using search, replace the placeholder with your Tavily API key:

```python
tavily = TavilyClient(api_key="YOUR API KEY HERE")
```
Run the script:

```shell
python maestro.py
```

and enter your objective when prompted:

```
Please enter your objective: Your objective here
```
The script will start the task breakdown and execution process. It will display the progress and results in the console using formatted panels.
Once the process is complete, the script will display the refined final output and save the full exchange log to a Markdown file with a filename based on the objective.
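The exact naming scheme lives in the script, but deriving a safe Markdown filename from the objective might look like this sketch (the helper name and timestamp format are assumptions, not the script's actual code):

```python
import re
from datetime import datetime

def objective_to_filename(objective: str, max_len: int = 40) -> str:
    """Turn an objective string into a safe Markdown log filename.

    Illustrative helper; the actual script's naming scheme may differ.
    """
    # Keep alphanumerics, collapse everything else into underscores
    slug = re.sub(r"[^A-Za-z0-9]+", "_", objective).strip("_")[:max_len]
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{timestamp}_{slug}.md"
```

This avoids characters that are illegal in filenames while keeping the objective recognizable.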
The script consists of the following main functions:

- `opus_orchestrator(objective, previous_results=None)`: Calls the Opus model to break down the objective into sub-tasks or provide the final output. It uses an improved prompt to assess task completion and includes the phrase "The task is complete:" when the objective is fully achieved.
- `haiku_sub_agent(prompt, previous_haiku_tasks=None)`: Calls the Haiku model to execute a sub-task prompt, providing it with the memory of previous sub-tasks.
- `opus_refine(objective, sub_task_results)`: Calls the Opus model to review and refine the sub-task results into a cohesive final output.

The script follows an iterative process, repeatedly calling the opus_orchestrator function to break down the objective into sub-tasks until the final output is provided. Each sub-task is then executed by the haiku_sub_agent function, and the results are stored in the task_exchanges and haiku_tasks lists.
The loop terminates when the Opus model includes the phrase "The task is complete:" in its response, indicating that the objective has been fully achieved.
Finally, the opus_refine function is called to review and refine the sub-task results into a final output. The entire exchange log, including the objective, task breakdown, and refined final output, is saved to a Markdown file.
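The overall control flow can be sketched with stubbed model calls standing in for the real Anthropic API requests (the stub behavior is illustrative; only the loop shape and the sentinel phrase come from the script's description):

```python
def opus_orchestrator_stub(objective, previous_results):
    # Stub: a real implementation would call the Opus model here.
    # Declares completion once two sub-task results exist.
    if previous_results and len(previous_results) >= 2:
        return "The task is complete: combined results ready for refinement."
    step = len(previous_results) + 1 if previous_results else 1
    return f"Sub-task {step} for objective: {objective}"

def haiku_sub_agent_stub(prompt, previous_haiku_tasks):
    # Stub: a real implementation would call the Haiku model here.
    return f"Result of ({prompt})"

def run_orchestration(objective):
    task_exchanges = []  # (sub_task_prompt, result) pairs
    haiku_tasks = []     # memory passed to the sub-agent
    while True:
        response = opus_orchestrator_stub(
            objective, [result for _, result in task_exchanges]
        )
        if "The task is complete:" in response:
            # Sentinel phrase: the objective is fully achieved
            return response.replace("The task is complete:", "").strip(), task_exchanges
        result = haiku_sub_agent_stub(response, haiku_tasks)
        haiku_tasks.append({"task": response, "result": result})
        task_exchanges.append((response, result))

final_output, exchanges = run_orchestration("Write a haiku about spring")
```

With the stubs above, the loop runs two sub-tasks and then terminates on the sentinel phrase, mirroring the termination condition described for the real script.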
You can customize the script according to your needs.
This script is released under the MIT License.