MLE-Agent: Your intelligent companion for seamless AI engineering and research.

[kaia-llama logo]

💌 Father's love for Kaia 💌

[Badges: PyPI version · downloads · GitHub license | Join our Discord community]

Overview

MLE-Agent is designed as a pairing LLM agent for machine learning engineers and researchers. Its key features include:

  • 🤖 Autonomous Baseline Creation: Automatically builds ML/AI baselines.
  • 🔍 Arxiv and Papers with Code Integration: Access best practices and state-of-the-art methods.
  • 🐛 Smart Debugging: Ensures high-quality code through automatic debugger-coder interactions.
  • 📂 File System Integration: Organizes your project structure efficiently.
  • 🧰 Comprehensive Tools Integration: Includes AI/ML functions and MLOps tools for a seamless workflow.
  • ☕ Interactive CLI Chat: Enhances your projects with an easy-to-use chat interface.
[Demo video: mle_v030.mp4]

Milestones

  • 🚀 07/25/2024: Released v0.3.0 with a major refactoring and many new integrations
  • 🚀 07/11/2024: Released v0.2.0 with multi-agent interaction
  • 👨‍🍼 07/03/2024: Kaia is born
  • 🚀 06/01/2024: Released the first rule-based version of MLE-Agent (v0.1.0)

Get started

Installation

pip install mle-agent -U
# or install from source
git clone git@github.com:MLSysOps/MLE-agent.git
cd MLE-agent
pip install -e .
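
After installing, you can confirm the mle command is on your PATH (assuming the CLI follows the standard --help convention):

mle --help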

Usage

mle new <project name>

A project directory will be created under the current path; start the project from inside that directory:

cd <project name>
mle start

You can also start an interactive chat in the terminal under the project directory:

mle chat
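
Putting these commands together, a typical session looks like the sketch below (the project name baseline-project is only a placeholder):

mle new baseline-project
cd baseline-project
mle start    # run the agent workflow
# or, for a human-in-the-loop session inside the project directory:
mle chat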

Roadmap

The following is a list of tasks we plan to work on. Feel free to propose new ones!

🔨 General Features
  • Understand users' requirements to create an end-to-end AI project
  • Suggest state-of-the-art (SOTA) data science solutions using web search
  • Plan the ML engineering tasks with human interaction
  • Execute the code on the local machine/cloud, debug and fix the errors
  • Leverage the built-in functions to complete ML engineering tasks
  • Interactive chat: A human-in-the-loop mode to help improve the existing ML projects
  • Kaggle mode: complete a Kaggle task without human intervention
  • Summarize and reflect on the whole ML/AI pipeline
  • Integration with cloud data, testing, and debugging platforms
  • Local RAG support for a personal ML/AI coding assistant
  • Function zoo: generate AI/ML functions and save them for future usage
⭐ More LLMs and Serving Tools
  • Ollama (Llama 3)
  • OpenAI GPTs
  • Anthropic Claude 3.5 Sonnet
💖 Better user experience
  • CLI Application
  • Web UI
  • Discord
🧩 Functions and Integrations
  • Local file system
  • Local code executor
  • Arxiv.org search
  • Papers with Code search
  • General keyword search
  • Hugging Face
  • SkyPilot cloud deployment
  • Snowflake data
  • AWS S3 data
  • Databricks data catalog
  • Wandb experiment monitoring
  • MLflow management
  • DBT data transform

Contributing

We welcome contributions from the community. We are looking for contributors to help us with the following tasks:

  • Benchmark and Evaluate the agent
  • Add more features to the agent
  • Improve the documentation
  • Write tests

Please check the CONTRIBUTING.md file if you want to contribute.

Support and Community

Star History

[Star History Chart]

License

Check the MIT License file for more information.