
abhigyanpatwari / GitNexus

  • Sunday, 22 February 2026, 00:00:02
https://github.com/abhigyanpatwari/GitNexus

GitNexus: The Zero-Server Code Intelligence Engine - GitNexus is a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file and get an interactive knowledge graph with a built-in Graph RAG agent. Perfect for code exploration.



GitNexus

Building git for agent context.

Indexes any codebase into a knowledge graph — every dependency, call chain, cluster, and execution flow — then exposes it through smart tools so AI agents never miss code.

npm version License: PolyForm Noncommercial

Gitnexus_CLI.1.mp4

Like DeepWiki, but deeper. DeepWiki helps you understand code. GitNexus lets you analyze it — because a knowledge graph tracks every relationship, not just descriptions.

TL;DR: The Web UI is a quick way to chat with any repo. The CLI + MCP is how you make your AI agent actually reliable — it gives Cursor, Claude Code, and friends a deep architectural view of your codebase so they stop missing dependencies, breaking call chains, and shipping blind edits. Even smaller models get full architectural clarity, letting them compete with much larger models.


Two Ways to Use GitNexus

|          | CLI + MCP | Web UI |
|----------|-----------|--------|
| **What** | Index repos locally, connect AI agents via MCP | Visual graph explorer + AI chat in browser |
| **For** | Daily development with Cursor, Claude Code, Windsurf, OpenCode | Quick exploration, demos, one-off analysis |
| **Scale** | Full repos, any size | Limited by browser memory (~5k files) |
| **Install** | `npm install -g gitnexus` | No install — gitnexus.vercel.app |
| **Storage** | KuzuDB native (fast, persistent) | KuzuDB WASM (in-memory, per session) |
| **Parsing** | Tree-sitter native bindings | Tree-sitter WASM |
| **Privacy** | Everything local, no network | Everything in-browser, no server |

CLI + MCP (recommended)

The CLI indexes your repository and runs an MCP server that gives AI agents deep codebase awareness.

Quick Start

# Index your repo (run from repo root)
npx gitnexus analyze

That's it. This indexes the codebase, installs agent skills, registers Claude Code hooks, and creates AGENTS.md / CLAUDE.md context files — all in one command.

To configure MCP for your editor, run npx gitnexus setup once — or set it up manually below.

MCP Setup

gitnexus setup auto-detects your editors and writes the correct global MCP config. You only need to run it once.

Editor Support

| Editor | MCP | Skills | Hooks (auto-augment) | Support |
|--------|-----|--------|----------------------|---------|
| Claude Code | Yes | Yes | Yes (PreToolUse) | Full |
| Cursor | Yes | Yes | — | MCP + Skills |
| Windsurf | Yes | — | — | MCP |
| OpenCode | Yes | Yes | — | MCP + Skills |

Claude Code gets the deepest integration: MCP tools + agent skills + PreToolUse hooks that automatically enrich grep/glob/bash calls with knowledge graph context.
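
For reference, Claude Code hooks live in `.claude/settings.json`. The general shape of a PreToolUse entry is sketched below; the matcher and command shown are placeholders rather than what GitNexus actually registers, since `gitnexus setup` writes the real entry for you:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Grep|Glob|Bash",
        "hooks": [{ "type": "command", "command": "<command registered by gitnexus>" }]
      }
    ]
  }
}
```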

If you prefer manual configuration:

Claude Code (full support — MCP + skills + hooks):

claude mcp add gitnexus -- npx -y gitnexus@latest mcp

Cursor (~/.cursor/mcp.json — global, works for all projects):

{
  "mcpServers": {
    "gitnexus": {
      "command": "npx",
      "args": ["-y", "gitnexus@latest", "mcp"]
    }
  }
}

OpenCode (~/.config/opencode/config.json):

{
  "mcp": {
    "gitnexus": {
      "command": "npx",
      "args": ["-y", "gitnexus@latest", "mcp"]
    }
  }
}

CLI Commands

gitnexus setup                      # Configure MCP for your editors (one-time)
gitnexus analyze [path]             # Index a repository (or update stale index)
gitnexus analyze --force            # Force full re-index
gitnexus analyze --skip-embeddings  # Skip embedding generation (faster)
gitnexus mcp                        # Start MCP server (stdio) — serves all indexed repos
gitnexus serve                      # Start HTTP server for web UI connection
gitnexus list                       # List all indexed repositories
gitnexus status                     # Show index status for current repo
gitnexus clean                      # Delete index for current repo
gitnexus clean --all --force        # Delete all indexes
gitnexus wiki [path]                # Generate repository wiki from knowledge graph
gitnexus wiki --model <model>       # Wiki with custom LLM model (default: gpt-4o-mini)
gitnexus wiki --base-url <url>      # Wiki with custom LLM API base URL

What Your AI Agent Gets

7 tools exposed via MCP:

| Tool | What It Does | `repo` Param |
|------|--------------|--------------|
| `list_repos` | Discover all indexed repositories | — |
| `query` | Process-grouped hybrid search (BM25 + semantic + RRF) | Optional |
| `context` | 360-degree symbol view — categorized refs, process participation | Optional |
| `impact` | Blast radius analysis with depth grouping and confidence | Optional |
| `detect_changes` | Git-diff impact — maps changed lines to affected processes | Optional |
| `rename` | Multi-file coordinated rename with graph + text search | Optional |
| `cypher` | Raw Cypher graph queries | Optional |

When only one repo is indexed, the repo parameter is optional. With multiple repos, specify which one: query({query: "auth", repo: "my-app"}).
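
The resolution rule can be sketched as a small helper (hypothetical names, not GitNexus internals):

```typescript
// Sketch of the repo-parameter resolution rule described above.
// `indexedRepos` stands in for the repo names read from the global registry.
function resolveRepo(indexedRepos: string[], repoArg?: string): string {
  if (repoArg) {
    if (!indexedRepos.includes(repoArg)) {
      throw new Error(`Unknown repo: ${repoArg}`);
    }
    return repoArg;
  }
  // With exactly one indexed repo, the parameter may be omitted.
  if (indexedRepos.length === 1) return indexedRepos[0];
  throw new Error(
    `Multiple repos indexed (${indexedRepos.join(", ")}); pass repo explicitly`
  );
}
```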

Resources for instant context:

| Resource | Purpose |
|----------|---------|
| `gitnexus://repos` | List all indexed repositories (read this first) |
| `gitnexus://repo/{name}/context` | Codebase stats, staleness check, and available tools |
| `gitnexus://repo/{name}/clusters` | All functional clusters with cohesion scores |
| `gitnexus://repo/{name}/cluster/{name}` | Cluster members and details |
| `gitnexus://repo/{name}/processes` | All execution flows |
| `gitnexus://repo/{name}/process/{name}` | Full process trace with steps |
| `gitnexus://repo/{name}/schema` | Graph schema for Cypher queries |

2 MCP prompts for guided workflows:

| Prompt | What It Does |
|--------|--------------|
| `detect_impact` | Pre-commit change analysis — scope, affected processes, risk level |
| `generate_map` | Architecture documentation from the knowledge graph with mermaid diagrams |

4 agent skills installed to .claude/skills/ automatically:

  • Exploring — Navigate unfamiliar code using the knowledge graph
  • Debugging — Trace bugs through call chains
  • Impact Analysis — Analyze blast radius before changes
  • Refactoring — Plan safe refactors using dependency mapping

Multi-Repo MCP Architecture

GitNexus uses a global registry so one MCP server can serve multiple indexed repos. No per-project MCP config needed — set it up once and it works everywhere.
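
The exact registry format is an implementation detail, but conceptually it maps repo names to index locations and metadata. A hypothetical sketch of `~/.gitnexus/registry.json` (field names are illustrative, not the real schema):

```json
{
  "repos": {
    "my-app": {
      "path": "/home/me/projects/my-app",
      "indexedAt": "2026-02-20T12:00:00Z"
    }
  }
}
```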

flowchart TD
    subgraph CLI [CLI Commands]
        Setup["gitnexus setup"]
        Analyze["gitnexus analyze"]
        Clean["gitnexus clean"]
        List["gitnexus list"]
    end

    subgraph Registry ["~/.gitnexus/"]
        RegFile["registry.json"]
    end

    subgraph Repos [Project Repos]
        RepoA[".gitnexus/ in repo A"]
        RepoB[".gitnexus/ in repo B"]
    end

    subgraph MCP [MCP Server]
        Server["server.ts"]
        Backend["LocalBackend"]
        Pool["Connection Pool"]
        ConnA["KuzuDB conn A"]
        ConnB["KuzuDB conn B"]
    end

    Setup -->|"writes global MCP config"| CursorConfig["~/.cursor/mcp.json"]
    Analyze -->|"registers repo"| RegFile
    Analyze -->|"stores index"| RepoA
    Clean -->|"unregisters repo"| RegFile
    List -->|"reads"| RegFile
    Server -->|"reads registry"| RegFile
    Server --> Backend
    Backend --> Pool
    Pool -->|"lazy open"| ConnA
    Pool -->|"lazy open"| ConnB
    ConnA -->|"queries"| RepoA
    ConnB -->|"queries"| RepoB

How it works: Each gitnexus analyze stores the index in .gitnexus/ inside the repo (portable, gitignored) and registers a pointer in ~/.gitnexus/registry.json. When an AI agent starts, the MCP server reads the registry and can serve any indexed repo. KuzuDB connections are opened lazily on first query and evicted after 5 minutes of inactivity (max 5 concurrent). If only one repo is indexed, the repo parameter is optional on all tools — agents don't need to change anything.


Web UI (browser-based)

A fully client-side graph explorer and AI chat. No server, no install — your code never leaves the browser.

Try it now: gitnexus.vercel.app — drag & drop a ZIP and start exploring.


Or run locally:

git clone https://github.com/abhigyanpatwari/gitnexus.git
cd gitnexus/gitnexus-web
npm install
npm run dev

The web UI uses the same indexing pipeline as the CLI but runs entirely in WebAssembly (Tree-sitter WASM, KuzuDB WASM, in-browser embeddings). It's great for quick exploration but limited by browser memory for larger repos.


The Problem GitNexus Solves

Tools like Cursor, Claude Code, Cline, Roo Code, and Windsurf are powerful — but they don't truly know your codebase structure.

What happens:

  1. AI edits UserService.validate()
  2. Doesn't know 47 functions depend on its return type
  3. Breaking changes ship

Traditional Graph RAG vs GitNexus

Traditional approaches give the LLM raw graph edges and hope it explores enough. GitNexus precomputes structure at index time — clustering, tracing, scoring — so tools return complete context in one call:

flowchart TB
    subgraph Traditional["Traditional Graph RAG"]
        direction TB
        U1["User: What depends on UserService?"]
        U1 --> LLM1["LLM receives raw graph"]
        LLM1 --> Q1["Query 1: Find callers"]
        Q1 --> Q2["Query 2: What files?"]
        Q2 --> Q3["Query 3: Filter tests?"]
        Q3 --> Q4["Query 4: High-risk?"]
        Q4 --> OUT1["Answer after 4+ queries"]
    end

    subgraph GN["GitNexus Smart Tools"]
        direction TB
        U2["User: What depends on UserService?"]
        U2 --> TOOL["impact UserService upstream"]
        TOOL --> PRECOMP["Pre-structured response:
        8 callers, 3 clusters, all 90%+ confidence"]
        PRECOMP --> OUT2["Complete answer, 1 query"]
    end

Core innovation: Precomputed Relational Intelligence

  • Reliability — LLM can't miss context, it's already in the tool response
  • Token efficiency — No 10-query chains to understand one function
  • Model democratization — Smaller LLMs work because tools do the heavy lifting

How It Works

GitNexus builds a complete knowledge graph of your codebase through a multi-phase indexing pipeline:

  1. Structure — Walks the file tree and maps folder/file relationships
  2. Parsing — Extracts functions, classes, methods, and interfaces using Tree-sitter ASTs
  3. Resolution — Resolves imports and function calls across files with language-aware logic
  4. Clustering — Groups related symbols into functional communities
  5. Processes — Traces execution flows from entry points through call chains
  6. Search — Builds hybrid search indexes for fast retrieval
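
Step 6 merges the BM25 and semantic result lists with Reciprocal Rank Fusion (RRF). A minimal sketch of RRF, illustrative rather than the exact implementation:

```typescript
// Reciprocal Rank Fusion: fuse ranked lists by summing 1 / (k + rank).
// k = 60 is the conventional constant from the original RRF paper.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // rank is 1-based; documents near the top of any list score highest
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```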

Supported Languages

TypeScript, JavaScript, Python, Java, C, C++, C#, Go, Rust


Tool Examples

Impact Analysis

impact({target: "UserService", direction: "upstream", minConfidence: 0.8})

TARGET: Class UserService (src/services/user.ts)

UPSTREAM (what depends on this):
  Depth 1 (WILL BREAK):
    handleLogin [CALLS 90%] -> src/api/auth.ts:45
    handleRegister [CALLS 90%] -> src/api/auth.ts:78
    UserController [CALLS 85%] -> src/controllers/user.ts:12
  Depth 2 (LIKELY AFFECTED):
    authRouter [IMPORTS] -> src/routes/auth.ts

Options: maxDepth, minConfidence, relationTypes (CALLS, IMPORTS, EXTENDS, IMPLEMENTS), includeTests
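
Conceptually, upstream impact is a bounded breadth-first walk over reversed dependency edges with a confidence cutoff. A simplified sketch with hypothetical graph shapes, not the real tool:

```typescript
// Simplified upstream impact: BFS over reversed edges, grouped by depth,
// skipping edges below the confidence threshold. Edge/graph shapes are
// illustrative, not the actual GitNexus schema.
type Edge = { from: string; to: string; type: string; confidence: number };

function upstreamImpact(
  edges: Edge[],
  target: string,
  maxDepth = 2,
  minConfidence = 0.8
): Map<number, string[]> {
  const byDepth = new Map<number, string[]>();
  const visited = new Set<string>([target]);
  let frontier = [target];
  for (let depth = 1; depth <= maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        // Upstream = things that point AT the current node.
        if (e.to === node && e.confidence >= minConfidence && !visited.has(e.from)) {
          visited.add(e.from);
          next.push(e.from);
        }
      }
    }
    if (next.length > 0) byDepth.set(depth, next);
    frontier = next;
  }
  return byDepth;
}
```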

Process-Grouped Search

query({query: "authentication middleware"})

processes:
  - summary: "LoginFlow"
    priority: 0.042
    symbol_count: 4
    process_type: cross_community
    step_count: 7

process_symbols:
  - name: validateUser
    type: Function
    filePath: src/auth/validate.ts
    process_id: proc_login
    step_index: 2

definitions:
  - name: AuthConfig
    type: Interface
    filePath: src/types/auth.ts

Context (360-degree Symbol View)

context({name: "validateUser"})

symbol:
  uid: "Function:validateUser"
  kind: Function
  filePath: src/auth/validate.ts
  startLine: 15

incoming:
  calls: [handleLogin, handleRegister, UserController]
  imports: [authRouter]

outgoing:
  calls: [checkPassword, createSession]

processes:
  - name: LoginFlow (step 2/7)
  - name: RegistrationFlow (step 3/5)

Detect Changes (Pre-Commit)

detect_changes({scope: "all"})

summary:
  changed_count: 12
  affected_count: 3
  changed_files: 4
  risk_level: medium

changed_symbols: [validateUser, AuthService, ...]
affected_processes: [LoginFlow, RegistrationFlow, ...]

Rename (Multi-File)

rename({symbol_name: "validateUser", new_name: "verifyUser", dry_run: true})

status: success
files_affected: 5
total_edits: 8
graph_edits: 6     (high confidence)
text_search_edits: 2  (review carefully)
changes: [...]

Cypher Queries

// Find what calls auth functions with high confidence
MATCH (c:Community {heuristicLabel: 'Authentication'})<-[:CodeRelation {type: 'MEMBER_OF'}]-(fn)
MATCH (caller)-[r:CodeRelation {type: 'CALLS'}]->(fn)
WHERE r.confidence > 0.8
RETURN caller.name, fn.name, r.confidence
ORDER BY r.confidence DESC

Wiki Generation

Generate LLM-powered documentation from your knowledge graph:

# Requires an LLM API key (OPENAI_API_KEY, etc.)
gitnexus wiki

# Use a custom model or provider
gitnexus wiki --model gpt-4o
gitnexus wiki --base-url https://api.anthropic.com/v1

# Force full regeneration
gitnexus wiki --force

The wiki generator reads the indexed graph structure, groups files into modules via LLM, generates per-module documentation pages, and creates an overview page — all with cross-references to the knowledge graph.
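
At a high level this is a three-stage pipeline. A sketch with a stubbed LLM call (hypothetical names, not the real generator):

```typescript
// Sketch of the wiki pipeline: group files into modules (via LLM),
// write one page per module, then an overview page. `llm` is a stand-in
// for whatever chat-completion client is configured.
type Llm = (prompt: string) => string;

function generateWiki(files: string[], llm: Llm): Map<string, string> {
  // Stage 1: ask the LLM to group files into named modules.
  const grouping: Record<string, string[]> = JSON.parse(
    llm(`Group these files into modules as JSON: ${files.join(", ")}`)
  );
  const pages = new Map<string, string>();
  // Stage 2: one documentation page per module.
  for (const [module, members] of Object.entries(grouping)) {
    pages.set(`${module}.md`, llm(`Document module ${module}: ${members.join(", ")}`));
  }
  // Stage 3: overview page tying the module pages together.
  pages.set("overview.md", llm(`Write an overview of: ${Object.keys(grouping).join(", ")}`));
  return pages;
}
```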


Tech Stack

| Layer | CLI | Web |
|-------|-----|-----|
| Runtime | Node.js (native) | Browser (WASM) |
| Parsing | Tree-sitter native bindings | Tree-sitter WASM |
| Database | KuzuDB native | KuzuDB WASM |
| Embeddings | HuggingFace transformers.js (GPU/CPU) | transformers.js (WebGPU/WASM) |
| Search | BM25 + semantic + RRF | BM25 + semantic + RRF |
| Agent Interface | MCP (stdio) | LangChain ReAct agent |
| Visualization | — | Sigma.js + Graphology (WebGL) |
| Frontend | — | React 18, TypeScript, Vite, Tailwind v4 |
| Clustering | Graphology | Graphology |
| Concurrency | Worker threads + async | Web Workers + Comlink |

Roadmap

Actively Building

  • LLM Cluster Enrichment — Semantic cluster names via LLM API
  • AST Decorator Detection — Parse @Controller, @Get, etc.
  • Incremental Indexing — Only re-index changed files

Recently Completed

  • Wiki Generation, Multi-File Rename, Git-Diff Impact Analysis
  • Process-Grouped Search, 360-Degree Context, Claude Code Hooks
  • Multi-Repo MCP, Zero-Config Setup, 9 Language Support
  • Community Detection, Process Detection, Confidence Scoring
  • Hybrid Search, Vector Index

Security & Privacy

  • CLI: Everything runs locally on your machine. No network calls. Index stored in .gitnexus/ (gitignored). Global registry at ~/.gitnexus/ stores only paths and metadata.
  • Web: Everything runs in your browser. No code uploaded to any server. API keys stored in localStorage only.
  • Open source — audit the code yourself.

Acknowledgments