Codesight: Universal AI Context Condensation Engine for Multi-IDE Workflows

Houseofmvps/codesight · Updated 2026-04-09T04:20:24.568Z
Trend 23
Stars 648
Weekly +143

Summary

Codesight is a Model Context Protocol (MCP)-native CLI tool that implements semantic codebase compression to reduce LLM token consumption by 60-85% across Claude Code, Cursor, and Copilot. It bridges the gap between repository-scale analysis and rigid context window limitations through adaptive graph-based context selection algorithms.

Architecture & Design

Layered Condensation Pipeline

Codesight implements a four-stage transformation pipeline that treats codebases as compressible knowledge graphs rather than flat text corpora.

| Layer | Responsibility | Key Modules |
| --- | --- | --- |
| Parser | AST extraction & symbol resolution | TreeSitterEngine, ImportResolver |
| Analysis | Dependency graph construction | RepoGraphBuilder, CallHierarchyAnalyzer |
| Condensation | Semantic compression & ranking | TokenBudgetManager, RelevanceRanker |
| Adapter | Format-specific emission | ClaudeAdapter, CursorAdapter, MCPStreamer |

Core Abstractions

  • RepoGraph: Weighted directed graph where nodes represent symbols (functions, classes, types) and edges represent dependencies, annotated with usage frequency metrics.
  • TokenBudgetManager: Implements knapsack-style optimization to maximize information density within provider-specific token limits (e.g., Claude 200k vs Codex 128k).
  • ContextEmitter: Strategy pattern for serializing condensed graphs into target-specific formats (XML for Claude, Markdown for Cursor, JSON for MCP).
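The ContextEmitter strategy pattern described above can be sketched as follows. This is an illustrative assumption, not Codesight's actual source: the `CondensedSymbol` shape, the `ClaudeEmitter`/`CursorEmitter` class names, and the exact output formats are all hypothetical.

```typescript
// Hypothetical sketch of the ContextEmitter strategy pattern.
// Each emitter serializes the same condensed symbols into a
// target-specific format (XML for Claude, Markdown for Cursor).

interface CondensedSymbol {
  name: string;
  signature: string;
}

interface ContextEmitter {
  emit(symbols: CondensedSymbol[]): string;
}

// XML-style tags, matching the article's claim about Claude output
class ClaudeEmitter implements ContextEmitter {
  emit(symbols: CondensedSymbol[]): string {
    return symbols
      .map((s) => `<symbol name="${s.name}">${s.signature}</symbol>`)
      .join("\n");
  }
}

// Markdown headers, matching the article's claim about Cursor output
class CursorEmitter implements ContextEmitter {
  emit(symbols: CondensedSymbol[]): string {
    return symbols
      .map((s) => `### ${s.name}\n\`${s.signature}\``)
      .join("\n\n");
  }
}

// Dispatch by target format; adding a new IDE means adding one emitter.
function emitContext(
  format: "claude" | "cursor",
  symbols: CondensedSymbol[],
): string {
  const emitters: Record<string, ContextEmitter> = {
    claude: new ClaudeEmitter(),
    cursor: new CursorEmitter(),
  };
  return emitters[format].emit(symbols);
}
```

The payoff of the pattern is that condensation runs once and each target format costs only one small class.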

Tradeoffs

The architecture sacrifices perfect fidelity for semantic relevance, deliberately omitting rarely-used utility functions while preserving critical path signatures.

Key Innovations

Codesight's breakthrough lies in treating IDE context as a portable, compressible asset rather than a static snapshot, enabling dynamic budget allocation across heterogeneous LLM consumers.

  1. Semantic Hierarchy Compression (SHC): Implements a tree-shaking algorithm for AST nodes, removing implementation details of leaf dependencies while preserving type signatures and docstrings. Reduces token count by 40-60% without impacting LLM comprehension of call graphs.
  2. MCP-native Bidirectional Streaming: Unlike file-based context generators, Codesight implements the Model Context Protocol server specification, exposing repo/condense and repo/map tools that allow AI agents to request variable compression ratios dynamically based on query complexity.
  3. PageRank-informed Relevance Scoring: Adapts eigenvector centrality (the idea underlying PageRank) to code dependency graphs, weighting symbols by their position in the call hierarchy rather than by raw text frequency. Critical for preserving architectural intent in monorepos.
  4. Multi-target Emission Engine: Single-source serialization to disparate formats—XML tags for Claude's XML reasoning, Markdown headers for Cursor's context blocks, and structured JSON for Codex function calling schemas.
  5. Incremental Diff Contextualization: Caches previous condensation states and emits only git diff-affected subgraphs, reducing subsequent query overhead by 90% in iterative coding workflows.
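The signature-preserving idea behind SHC (innovation 1) can be illustrated with a deliberately naive sketch: keep a leaf function's docstring and signature, elide its body. Codesight reportedly does this on tree-sitter ASTs; the string-based `condenseLeaf` below is only a toy assumption, not the project's algorithm.

```typescript
// Toy illustration of signature-preserving compression: everything up to
// the first opening brace (doc comment + signature) is kept, and the
// implementation body is replaced with an elision marker.
// Caveat: a real implementation works on the AST, so braces inside
// comments or strings would not confuse it the way they would here.
function condenseLeaf(source: string): string {
  const braceIdx = source.indexOf("{");
  if (braceIdx === -1) return source; // nothing to elide
  return source.slice(0, braceIdx) + "{ /* body elided */ }";
}
```

Applied to a documented function, this keeps exactly what an LLM needs to reason about the call graph (name, parameters, return type, docs) while discarding the tokens it rarely needs (the body of a leaf dependency).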

Implementation Detail

// TokenBudgetManager core logic: rank symbols by graph centrality,
// then pack the highest-value nodes into the provider's token budget.
class TokenBudgetManager {
  selectContext(graph: RepoGraph, budget: number): ContextNode[] {
    const ranked = pagerank(graph, { damping: 0.85 });
    return knapsack(ranked, budget, (node) => estimateTokens(node));
  }
}
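The repository excerpt above does not show `knapsack` itself. A plausible greedy variant packs symbols by relevance-per-token density until the budget is exhausted; the `RankedNode` shape and the density heuristic below are assumptions for illustration, not Codesight's published implementation.

```typescript
// Hypothetical greedy knapsack for token budgeting: sort by
// score-per-token density, then take nodes while they still fit.

interface RankedNode {
  id: string;
  score: number;  // PageRank-style relevance weight
  tokens: number; // estimated token cost of emitting this symbol
}

function greedyKnapsack(nodes: RankedNode[], budget: number): RankedNode[] {
  // Highest information density first
  const byDensity = [...nodes].sort(
    (a, b) => b.score / b.tokens - a.score / a.tokens,
  );
  const selected: RankedNode[] = [];
  let used = 0;
  for (const node of byDensity) {
    if (used + node.tokens <= budget) {
      selected.push(node);
      used += node.tokens;
    }
  }
  return selected;
}
```

Greedy density packing is not optimal for the 0/1 knapsack problem in general, but it runs in O(n log n) and is a common practical choice when token estimates are rough anyway.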

Performance Characteristics

Compression Metrics

| Metric | Value | Context |
| --- | --- | --- |
| Token Reduction | 60-85% | Python/TypeScript repos >10k LOC |
| Processing Latency | <2.5s | 100k LOC codebase, cold start |
| Cache Hit Rate | 94% | Incremental updates in watch mode |
| Semantic Retention | 92% | Measured via LLM benchmark accuracy |
| Memory Footprint | 180MB | Peak heap during graph construction |

Scalability Characteristics

Codesight exhibits sub-linear token growth relative to codebase size due to aggressive deduplication of type definitions and import statements. However, the graph construction phase shows O(n log n) complexity with respect to file count, creating a practical limit around 500k LOC for real-time usage without pre-indexing.

Limitations

  • Dynamic Language Support: Python and JavaScript benefit from SHC; C++ template metaprogramming and Ruby monkey-patching reduce compression effectiveness to ~30%.
  • Macro-heavy Codebases: Languages with heavy preprocessor usage (Rust macros, C headers) defeat static analysis, requiring fallback to raw text mode.
  • Context Window Floor: Below 4k-token budgets, the condensation metadata overhead exceeds the savings, so the tool is practical only for medium-to-large context windows.

Ecosystem & Alternatives

Competitive Landscape

| Tool | Approach | Token Efficiency | IDE Integration |
| --- | --- | --- | --- |
| Codesight | Semantic graph condensation | High (60-85%) | Universal (MCP + CLI) |
| Aider | Repo-map + CTAGS | Medium (40-50%) | Limited (file-based) |
| Sourcegraph Cody | Embedding-based retrieval | Variable | Deep IDE integration |
| Greptile | Vector search + summarization | Medium | API-only |
| Repomapper | AST folding | Low-Medium | CLI only |

Production Adoption Patterns

  • AI-Native Startups: Teams running Claude Code in agent mode use Codesight to prevent context overflow during multi-file refactoring operations.
  • Enterprise Monorepos: Organizations with 50k+ LOC TypeScript codebases integrate Codesight into CI pipelines to generate condensed context for automated code review bots.
  • Consulting/Contracting: Developers using multiple AI tools (Cursor for coding, Claude for architecture) leverage the universal adapter to maintain consistent context across platforms.

Integration Points

Codesight exposes a Unix-philosophy interface: git ls-files | codesight --budget 100k --format claude | pbcopy. It additionally registers as an MCP server over stdio transport, enabling Cursor and Claude Desktop to invoke it as a tool. Migration from manual @file references or Aider's repo-map is drop-in via the --compat-mode flag.

Momentum Analysis

AISignal exclusive — based on live signal data

Growth Trajectory: Explosive

Velocity Metrics

| Metric | Value | Interpretation |
| --- | --- | --- |
| Weekly Growth | +99 stars/week | Viral adoption in AI tooling community |
| 7-day Velocity | 175.8% | Acceleration phase, not yet plateaued |
| 30-day Velocity | 0.0% | Baseline artifact: project launched <30 days ago |
| Fork Ratio | 8.6% | High engagement: roughly 1 in 12 starrers fork (contribution intent) |

Adoption Phase Analysis

Codesight sits at the inflection point of Product-Market Fit discovery. The 175% weekly velocity indicates viral spread through AI engineering Twitter and Discord communities, driven by acute pain around token limits in Claude Code's agent mode. The high fork ratio (52 forks / 604 stars) suggests immediate developer customization needs, typical of infrastructure tools transitioning from 'cool demo' to 'daily driver' status.

Forward-Looking Assessment

Codesight is positioned to become infrastructure plumbing for the emerging MCP ecosystem, but faces existential risk from incumbent IDE vendors (Cursor, Windsurf) baking similar condensation directly into their proprietary context management.

Key risk factors: (1) OpenAI/Cursor implementing native 'smart context' making third-party compression redundant; (2) MCP protocol fragmentation; (3) Handling of non-code assets (images, binaries) in multimodal contexts. Success depends on maintaining universality across the fragmented AI IDE landscape.