huggingface / ml-intern
- Friday, April 24, 2026, 00:00:02
🤗 ml-intern: an open-source ML engineer that reads papers, trains models, and ships ML code

An ML intern that autonomously researches, writes, and ships good-quality, ML-related code using the Hugging Face ecosystem, with deep access to docs, papers, datasets, and cloud compute.
```shell
git clone git@github.com:huggingface/ml-intern.git
cd ml-intern
uv sync
uv tool install -e .
```

This installs the `ml-intern` command. Create a `.env` file in the project root (or export these in your shell):

```shell
ANTHROPIC_API_KEY=<your-anthropic-api-key>  # if using Anthropic models
HF_TOKEN=<your-hugging-face-token>
GITHUB_TOKEN=<github-personal-access-token>
```

If no `HF_TOKEN` is set, the CLI will prompt you to paste one on first launch. To get a `GITHUB_TOKEN`, follow the tutorial here.
Interactive mode (start a chat session):

```shell
ml-intern
```

Headless mode (single prompt, auto-approve):

```shell
ml-intern "fine-tune llama on my dataset"
```

Options:

```shell
ml-intern --model anthropic/claude-opus-4-6 "your prompt"
ml-intern --max-iterations 100 "your prompt"
ml-intern --no-stream "your prompt"
```

High-level architecture:

```
User/CLI
   │ Operations (user_input, exec_approval,        ▲ Events
   │ interrupt, compact, ...)                      │
   ▼ via submission_queue                          │ via event_queue
submission_loop (agent_loop.py)                    │
   1. Receive Operation from queue                 │
   2. Route to handler (run_agent/compact/...)     │
        │                                          │
        ▼                                          │
Handlers.run_agent() ── emits events ──────────────┘
   Agentic Loop (max 300 iterations)
      Session
         ContextManager
            • Message history (litellm.Message[])
            • Auto-compaction (170k)
            • Session upload to HF
         ToolRouter
            ├─ HF docs & research
            ├─ HF repos, datasets, jobs, papers
            ├─ GitHub code search
            ├─ Sandbox & local tools
            ├─ Planning
            └─ MCP server tools
      Doom Loop Detector
         • Detects repeated tool patterns
         • Injects corrective prompts
      Loop:
         1. LLM call (litellm.acompletion)
         2. Parse tool_calls[]
         3. Approval check (jobs, sandbox, destructive ops)
         4. Execute via ToolRouter
         5. Add results to ContextManager
         6. Repeat if tool_calls exist
```
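The two-queue wiring above can be sketched with plain `asyncio` queues. The `Operation`/`Event` shapes, field names, and sentinel-based shutdown here are illustrative assumptions, not the project's actual types:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-ins for the Operation/Event values exchanged on the queues.
@dataclass
class Operation:
    kind: str          # e.g. "user_input", "interrupt", "compact"
    payload: str = ""

@dataclass
class Event:
    kind: str          # e.g. "processing", "turn_complete", "shutdown"
    payload: str = ""

async def submission_loop(submissions: asyncio.Queue, events: asyncio.Queue) -> None:
    """Drain operations from the CLI side and emit events back, until a sentinel."""
    while True:
        op = await submissions.get()
        if op is None:                               # sentinel: shut down
            await events.put(Event("shutdown"))
            return
        await events.put(Event("processing", op.payload))
        # ... route to a handler (run_agent / compact / ...) here ...
        await events.put(Event("turn_complete", op.payload))

async def main() -> list[Event]:
    submissions, events = asyncio.Queue(), asyncio.Queue()
    loop_task = asyncio.create_task(submission_loop(submissions, events))
    await submissions.put(Operation("user_input", "hello"))
    await submissions.put(None)
    await loop_task
    out = []
    while not events.empty():
        out.append(events.get_nowait())
    return out

events = asyncio.run(main())
print([e.kind for e in events])  # ['processing', 'turn_complete', 'shutdown']
```

The point of the split is that the CLI never blocks on the agent: it pushes operations and renders whatever events arrive.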
Per-turn flow:

```
User Message
     │
[Add to ContextManager]
     │
     ▼
Iteration Loop (max 300):
     Get messages + tool specs
          │
     litellm.acompletion()
          │
     Has tool_calls? ──No──> Done
          │ Yes
     Add assistant msg (with tool_calls)
          │
     Doom loop check
          │
     For each tool_call:
       • Needs approval? ──Yes──> Wait for user confirm
       • ToolRouter.execute_tool()
       • Add result to ContextManager
          │
     Continue loop (back to top)
```
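Stripped of streaming and approvals, the iteration loop reduces to a few lines. In this sketch the LLM and tool execution are stubbed out; `call_llm`, `execute_tool`, `MAX_ITERATIONS`, and the dict message shapes are assumptions for illustration, not the project's actual API:

```python
MAX_ITERATIONS = 300  # iteration cap, as in the diagram above

def run_turn(messages: list[dict], call_llm, execute_tool) -> list[dict]:
    """Call the LLM, execute any requested tools, and repeat until no tool_calls."""
    for _ in range(MAX_ITERATIONS):
        reply = call_llm(messages)                     # 1. LLM call
        messages.append(reply)                         # add assistant msg
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:                             # no tools requested -> done
            return messages
        for call in tool_calls:                        # 4. execute each tool
            result = execute_tool(call["name"], call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})       # 5. feed results back
    return messages                                    # iteration cap reached

# Stubbed demo: first reply requests a tool, second reply is the final answer.
replies = iter([
    {"role": "assistant", "tool_calls": [{"name": "echo", "args": {"x": "hi"}}]},
    {"role": "assistant", "content": "done"},
])
history = run_turn(
    [{"role": "user", "content": "hi"}],
    call_llm=lambda msgs: next(replies),
    execute_tool=lambda name, args: f"{name}:{args['x']}",
)
print(len(history))  # 4 messages: user, assistant, tool, assistant
```

Each tool result goes back into the message list, so the next LLM call sees the full transcript of the turn so far.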
The agent emits the following events via event_queue:
- `processing`: Starting to process user input
- `ready`: Agent is ready for input
- `assistant_chunk`: Streaming token chunk
- `assistant_message`: Complete LLM response text
- `assistant_stream_end`: Token stream finished
- `tool_call`: Tool being called with arguments
- `tool_output`: Tool execution result
- `tool_log`: Informational tool log message
- `tool_state_change`: Tool execution state transition
- `approval_required`: Requesting user approval for sensitive operations
- `turn_complete`: Agent finished processing
- `error`: Error occurred during processing
- `interrupted`: Agent was interrupted
- `compacted`: Context was compacted
- `undo_complete`: Undo operation completed
- `shutdown`: Agent shutting down

To add a custom tool, edit `agent/core/tools.py`:
```python
def create_builtin_tools() -> list[ToolSpec]:
    return [
        ToolSpec(
            name="your_tool",
            description="What your tool does",
            parameters={
                "type": "object",
                "properties": {
                    "param": {"type": "string", "description": "Parameter description"}
                },
                "required": ["param"],
            },
            handler=your_async_handler,
        ),
        # ... existing tools
    ]
```

To register an MCP server, edit `configs/main_agent_config.json`:
```json
{
  "model_name": "anthropic/claude-sonnet-4-5-20250929",
  "mcpServers": {
    "your-server-name": {
      "transport": "http",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer ${YOUR_TOKEN}"
      }
    }
  }
}
```

Note: Environment variables like `${YOUR_TOKEN}` are auto-substituted from `.env`.
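A minimal sketch of how such `${VAR}` placeholders can be expanded from the environment; the project's actual substitution logic may differ (e.g. in how unknown variables are handled):

```python
import re

def expand_env(value: str, env: dict[str, str]) -> str:
    """Replace ${NAME} placeholders with values from env; leave unknown names as-is."""
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: env.get(m.group(1), m.group(0)),  # m.group(0) keeps "${NAME}" intact
        value,
    )

headers = {"Authorization": "Bearer ${YOUR_TOKEN}"}
env = {"YOUR_TOKEN": "abc123"}
expanded = {k: expand_env(v, env) for k, v in headers.items()}
print(expanded)  # {'Authorization': 'Bearer abc123'}
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes missing secrets visible in the outgoing request instead of silently producing a malformed header.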