3,734 tests passing

CAM-PULSE

Stop starting from zero. Start building from experience.

CAM-PULSE mines reusable patterns from real repos, remembers what works, and injects that knowledge into your AI-assisted builds — with full attribution and test verification. No hallucinated success. No forgotten context. Every build draws on 3,590 methodologies from 55 source repositories across 5 language-specific brains. A paired blind A/B test proves the impact: KB-equipped agents achieve 92.3% vs 73.1% success (p=0.015, Wilcoxon).

For developers who want AI that learns, not just generates. Free and MIT licensed.

3,590 Learned patterns
55 Source repos mined
3,734 Tests passing
+33.6% KB quality lift
cam mine-self --quick
CAM-PULSE terminal demo showing mine-self analysis with 3,734 tests passing
The Loop That Learns
Discover → Mine → Store → Retrieve → Route → Build → Verify → Learn → Self-Enhance

Every cycle feeds back. Successes strengthen patterns. Failures update priors. The system gets measurably better with use—not just bigger.

3,734 Tests Passing
5 Agent Backends
3,590 Methodologies
55 Source Repos
21 Showpieces
$0 MIT Licensed

Four things CAM-PULSE does that no other AI coding tool does

Not roadmap items. Not "coming soon." Each one is live, tested, and documented with real output.

🔍 Discover

Autonomously finds repos developers are sharing right now, filters for novelty against what it already knows, and keeps its knowledge fresh.

  • X-Scout scans X/Twitter in real time for repos worth learning from
  • GitHub and HuggingFace repos mined through the same pipeline
  • Freshness monitor detects stale knowledge and auto-triggers re-mining

cam pulse scan --keywords "..."

🧠 Learn

Doesn't just store code snippets. Extracts reusable engineering patterns with lifecycle tracking, fitness scoring, and cross-pattern co-retrieval. Ships with 31 seed methodologies so every fresh install already knows CAM's own algorithms.

  • Three-pass pipeline: classify free → check overlap → focused LLM extraction
  • EMA fitness scoring: patterns that work rise; patterns that fail decay
  • Seed knowledge: 31 curated patterns bootstrap every new install—self-aware from first run
  • Yield-priority mining: 5-factor scoring ranks repos by expected knowledge yield before spending tokens
  • 3,590 methodologies across 5 specialist ganglia, federated for cross-query

cam kb seed   cam mine --yield-sort
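The EMA fitness rule described above can be sketched in a few lines (a minimal illustration that assumes a success is scored 1.0 and a failure 0.0; CAM's actual weighting may differ):

```python
def update_fitness(prev: float, outcome: float, alpha: float = 0.2) -> float:
    """Exponential moving average: a success (1.0) pulls fitness up,
    a failure (0.0) decays it; alpha controls how fast history fades."""
    return (1 - alpha) * prev + alpha * outcome
```

Patterns that keep winning converge toward 1.0; patterns that stop working decay instead of lingering at a stale score.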

🔨 Build

Routes tasks to the best agent using Bayesian Kelly Criterion, injects learned patterns, and self-corrects when a run fails.

  • Kelly routing picks the best agent per task type from real performance data
  • Backends: Claude, GPT, Gemini, and Grok in the cloud, or fully local Ollama / MLX-LM
  • Inner correction loop restores bytes and re-prompts with violations
  • "90% coverage" can become a hard verification gate

cam create --execute --check "pytest -q"

Verify

Every build is checked against real diffs, real tests, and real metrics. Failures are reported plainly.

  • If no files changed, the run fails instead of pretending success
  • TruffleHog secret scanning blocks leaked credentials before mining
  • Uncertainty-aware fitness: low-sample agents get discounted confidence
  • cam forge-benchmark reports "0% lift" when that is the truth

cam security scan   cam validate

How It Works

Six stages. Multi-pass mining. Bayesian routing. Knowledge injection. The loop keeps running.

🔍

Discover

X-Scout + HF

Filter

Novelty + dedup

🧠

Mine

Three-pass pipeline

🎯

Route

Kelly Criterion

🔨

Build

Inject + generate

Verify

Tests + attribution

Discover → mine → store → retrieve → route → build → verify → learn.

Inspect Mine Ideate Spec Execute Validate Benchmark Self-Enhance

The 8-step CAM workflow. The verify → correct loop runs inside execution and emits artifacts you can inspect.

See It In Action

Real commands. Real output. Copy and run.

# Install and verify
git clone https://github.com/deesatzed/CAM-Pulse.git
cd CAM-Pulse
pip install -e ".[dev]"

# Smoke test
cam --help
cam govern stats
cam pulse preflight
# One-shot X scan: search, filter, learn
cam pulse scan \
  --keywords "AI agent framework,new repo" \
  --from-date 2026-03-21

# Dry run (scan + filter only, no cloning/mining)
cam pulse scan --dry-run

# View results
cam pulse discoveries --limit 20
cam pulse status
# Start perpetual polling daemon (default: every 30 min)
cam pulse daemon

# Custom interval
cam pulse daemon --interval 15

# View scan history and daily report
cam pulse scans
cam pulse report

# Docker swarm deployment
docker compose -f pulse/docker-compose.pulse.yml up -d
# Quick preview: file stats + domain signals (no LLM, free)
cam mine-self --quick

# Full LLM-powered mining of your own project
cam mine-self

# Mine a specific project directory
cam mine-self --path /path/to/project

# Extract patterns without generating tasks
cam mine-self --no-tasks
# Check all tracked repos for staleness
cam pulse freshness --verbose

# Seed freshness metadata for existing discoveries
cam pulse freshness --seed

# Auto-refresh stale repos (significance >= 0.4)
cam pulse freshness --auto-refresh

# Re-mine a specific repo
cam pulse refresh https://github.com/bytedance/deer-flow

# Re-mine all stale repos
cam pulse refresh --all

# Preview without modifying
cam pulse refresh --all --dry-run
# Ingest a HuggingFace model repo (auto-detects URL type)
cam pulse ingest https://huggingface.co/microsoft/phi-3-mini-4k-instruct

# Works with GitHub too — same command
cam pulse ingest https://github.com/bytedance/deer-flow

# Ingest HF repo by ID with revision control
cam pulse ingest-hf sentence-transformers/all-MiniLM-L6-v2 --revision main

# Force re-ingest even if already learned
cam pulse ingest --force https://huggingface.co/BAAI/bge-small-en-v1.5
# Scan directories for repos (preview only, no LLM)
cam mine-workspace ~/projects ~/experiments --scan-only

# Mine up to 20 repos across directories
cam mine-workspace ~/projects --max-repos 20

# Only mine repos that changed since last scan
cam mine-workspace ~/code --changed-only

# Deep scan with higher directory depth
cam mine-workspace ~/projects --depth 8
# Enable Kelly routing in claw.toml
# [kelly]
# enabled = true
# kappa = 10.0      # Shrinkage factor (higher = more conservative)
# f_max = 0.40      # Maximum Kelly fraction cap

# Run tasks — Kelly picks the best agent from real performance data
cam create --execute --type architecture --workspace ./my-project
# Kelly routing: task_type='architecture' -> agent 'claude'
# (weights: claude=0.376, gemini=0.261, grok=0.261, codex=0.102)

# A/B test knowledge ablation with adaptive margins
cam ab-test start knowledge_ablation
cam ab-test status
# Samples: 24/20  Margin adapts: n=5 -> 0.05, n=50 -> 0.125

# View agent performance scores
cam doctor
# Analyze knowledge gaps across your brain
cam gaps

# Discover new categories from unmapped patterns
cam gaps --discover

# Focus on a specific category
cam gaps --category security

# Exploratory retrieval: epsilon re-ranking + stratified corpus
cam learn search "circuit breaker patterns" --explore
$ cam pulse scan
=== PULSE Scan Report ===
Keywords: github.com new AI agent repo
Discovered: 18  Novel: 16  Assimilated: 16  Skipped: 2  Failed: 0

$ cam learn search "multi-agent routing" -n 3
Search results for "multi-agent routing"
0.89  Thompson Sampling for Multi-Agent Bayesian Routing [bug-ops/zeph]
0.84  Structured Multi-Agent Role System with Typed Capabilities [cronusl-1141/ai-company]
0.81  DAG-Based Task Dependency Resolution [shunsukehayashi/agent-skill-bus]

Proof Points

Each proof point was run, checked, and documented with real output. Grouped by the question it answers.

Does CAM's Knowledge Actually Help?

01
Live Discovery
16/16 repos discovered, mined, and stored in one command

PULSE Knowledge Loop

Discovers 18 repos from live X feeds, filters for novelty, clones and mines 16 novel repos into 86 new methodologies.

cam pulse scan 16/16 assimilated 86 methodologies 0 failures
Live Scan Results (March 22, 2026)
Discovered: 18 repos from X
Novel: 16 (2 already known)
Assimilated: 16/16 (0 failed)
Methodologies: 86 new patterns stored
JSON repair: 100% recovery (16/16 repaired)
02
Cross-Repo Synthesis
3→1 repos synthesized into one working module, 5/5 tests

Plugin Event System

Retrieves patterns from 3 mined repos and builds a cohesive event system: bus, middleware, plugin loader, loop detection. 258 lines.

3 repos → 1 module 5/5 tests full attribution
Live Results
Retrieved=3 | Used=3 | Attributed=3 | Quality=0.82
258 source lines across 5 modules
Event bus + middleware + plugin loader + loop detection
Knowledge compounds: Build A patterns inform Build B
03
Agent Self-Repair
10/10 tests passing after self-correction

Inner Correction Loop

On failure, byte-level restore resets the workspace and re-prompts the agent with violations plus test output. Drift 0.868, quality 0.76, lifecycle transition embryonic → viable.

workspace restore violation feedback 28 tests up to 3 retries
Live Run Results (March 24, 2026)
Run 1: Correction loop triggered 3x (restore + feedback confirmed)
Run 2: First-attempt success — 10/10 tests | Drift: 0.868 | Quality: 0.76
Knowledge: 2 PULSE patterns injected | Lifecycle: embryonic → viable
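The restore-and-re-prompt cycle can be sketched as follows; the `snapshot`, `restore`, and `verify` callables are hypothetical stand-ins for CAM's byte-level workspace machinery:

```python
def build_with_corrections(task, run_agent, verify, snapshot, restore, max_retries=3):
    """Correction-loop sketch: on failure, restore the workspace bytes and
    re-prompt the agent with the recorded violations."""
    state = snapshot()
    feedback = ""
    for _ in range(1 + max_retries):
        run_agent(task, feedback)
        ok, violations = verify()
        if ok:
            return True
        restore(state)                      # byte-level workspace restore
        feedback = "\n".join(violations)    # violations feed the next prompt
    return False
```

The key property is that every retry starts from a clean workspace, so the agent iterates on feedback rather than on its own half-broken edits.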

What Can CAM Build From Learned Patterns?

04
Knowledge Application

PULSE Usage Proof

Mined knowledge is retrieved, injected into agent prompts, and used to produce working code with passing tests. Retrieved=3, Used=3, Attributed=3.

Live Run Results (March 23, 2026)
Task: Pre-tool-call guardrail system
Agent output: 157 lines, 4 files, quality 0.85
Tests: 4/4 passing | Attribution: 3 patterns traced
05
Self-Evolution

Self-Enhancement Pipeline

CAM clones its own source, enhances the copy, validates through 7 gates, and atomically swaps the live install. Proven: quality 0.97, all 3,734 tests pass on the enhanced copy.

7-gate validation atomic swap quality 0.97
06
Smart Mining

Multi-Pass Mining Pipeline

Three-pass mining: domain classification, knowledge overlap assessment, and focused LLM extraction with adaptive token budget. Nine repos, 52 methodologies.

Pipeline Stages
Pass 1: Domain classification (10 categories, 0 cost)
Pass 2: KB overlap (overlap_score, suggested_focus)
Pass 3: LLM mining with structured context
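The two free passes can be illustrated with toy stand-ins (keyword-vote classification and Jaccard overlap are assumptions for illustration, not the production scorers):

```python
def classify(text: str, domains: dict) -> str:
    """Pass 1: keyword-vote domain classification. No LLM call, zero cost."""
    text = text.lower()
    return max(domains, key=lambda d: sum(w in text for w in domains[d]))

def overlap_score(repo_terms: set, kb_terms: set) -> float:
    """Pass 2: Jaccard overlap against terms the KB already covers.
    High overlap means pass 3 (the paid LLM pass) can be skipped or focused."""
    if not repo_terms:
        return 0.0
    return len(repo_terms & kb_terms) / len(repo_terms | kb_terms)
```

Only repos that survive both cheap passes earn a token budget for the focused extraction pass.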
07
Knowledge Application

Repo Upgrade Advisor

Ranked improvement recommendations with confidence scores from mined knowledge. Evaluates a repo and produces enhancement plans backed by methodology evidence.

cam evaluate confidence scores
08
Escalating Complexity

Expectation Ladder

Five levels of escalating complexity: health check → build → validate → mine → self-improve. Each level exercises more of CAM's pipeline.

5 levels progressive validation
09
Distributed Intelligence

CAM Swarm — Ganglion Federation + Drive-Ops

Multiple specialist CAM ganglia federate into a single CAM Brain. The Drive-Ops ganglion mined 1.5TB of local repos in 16 batches, producing 1,046 methodologies from 63 unique repos with 82 duplicates caught by content-hash dedup. Read-only FTS5 cross-queries pull knowledge across siblings when local confidence is low.

Drive-Ops Results
1,046 methodologies | 63 repos | 1.5TB drive scanned | 82 duplicates caught
16 batches | 3.5 hours | cross-query proven | brain manifests live
federation drive-ops ganglion content dedup FTS5 cross-query
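The content-hash dedup that caught the 82 duplicates can be sketched as an order-independent digest over a repo's files (a simplification; the keying Drive-Ops actually uses may differ):

```python
import hashlib

def content_key(files: dict) -> str:
    """Order-independent content hash: identical file sets map to the same
    key, so re-discovered or copied repos are flagged as duplicates."""
    h = hashlib.sha256()
    for path in sorted(files):          # sort so insertion order never matters
        h.update(path.encode())
        h.update(b"\x00")
        h.update(files[path])
    return h.hexdigest()
```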
10
Structured Verification

Metric Expectations

Natural-language specs become structured verification gates. "90% coverage" can be enforced as a hard gate. 51 tests cover extraction and enforcement.

Supported Metrics
min_coverage_pct | min_test_count
min_files_changed | max_files_changed
Operators: gte, gt, lte, lt, eq | Hard/soft gates
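A spec like "90% coverage" could become a gate such as `{"metric": "min_coverage_pct", "op": "gte", "threshold": 90, "hard": True}` and be enforced with a sketch like this (names are illustrative, not CAM's internal API):

```python
import operator

OPS = {"gte": operator.ge, "gt": operator.gt,
       "lte": operator.le, "lt": operator.lt, "eq": operator.eq}

def enforce(gates, metrics):
    """Return (passed, violations). A failed hard gate fails the run;
    a failed soft gate is reported but does not block."""
    passed, violations = True, []
    for g in gates:
        value = metrics.get(g["metric"], 0)
        if not OPS[g["op"]](value, g["threshold"]):
            violations.append(f'{g["metric"]} {g["op"]} {g["threshold"]} (got {value})')
            if g.get("hard", True):
                passed = False
    return passed, violations
```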

How Does the Discovery Pipeline Work?

11
Knowledge Maintenance

Repo Freshness Monitor

Phase 1 uses cached metadata checks. Phase 2 scores significance. Only important repo changes trigger a re-mine.

Significance Formula
commits * 0.3 + new_release * 0.4
+ readme_changed * 0.2 + size_delta * 0.1
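Assuming each term is normalized to [0, 1] before weighting (an assumption; the source does not state the normalization), the formula translates directly:

```python
def significance(commits: float, new_release: bool,
                 readme_changed: bool, size_delta: float) -> float:
    """Weighted change-significance score per the documented formula;
    inputs are assumed normalized to the [0, 1] range."""
    return (commits * 0.3 + float(new_release) * 0.4
            + float(readme_changed) * 0.2 + size_delta * 0.1)
```

A new release alone (0.4) is enough to clear the documented auto-refresh threshold of significance >= 0.4.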
12
Supply Chain Defense

Secret Scanning

Two-gate TruffleHog plus regex fallback. Every repo is scanned before mining. Critical findings block the pipeline.

Verified cam security scan src/ → CLEAN (0 findings)
cam security status → TruffleHog AVAILABLE
13
Autonomous Evolution

Self-Enhancement — CAM Improves Its Own Code

After mining new knowledge, CAM clones itself, runs its own build pipeline against the copy using newly learned patterns, validates through 7 gates (syntax, config, imports, DB schema, CLI smoke, full pytest, diff summary), and swaps only if ALL gates pass. Protected files (verifier, factory, engine, schema, config) require human review even if gates pass. Latest run: knowledge application demo produced a working async microservice with 3 PULSE patterns injected per attempt, drift 0.894, 12 files created. Self-enhancement: all 7 gates passed, 3,734 tests in 203.4s, atomic swap completed.

Latest Validation Run (March 2026)
7 gates PASSED | 3,734 tests | 203.4s | 5 protected files
Knowledge demo: 3 PULSE patterns injected | drift 0.894 | 12 files
clone → enhance → validate → swap → post-swap → rollback-ready
self-enhance 7-gate validation compiler-bootstrap protected files atomic swap 3,734 tests
14
Live Knowledge Application

End-to-End Knowledge Injection — From Mined Patterns to Architecture-Informed Code

CAM retrieves 3 PULSE patterns from 3,590 stored methodologies and injects them into the agent prompt alongside 12 evaluation hints. The agent produces architecture-informed code: endpoint separation, idempotent request tracking, and async SQLite persistence. The verifier checks drift alignment (0.894) and executes tests. On success, the EMA fitness feedback loop records the outcome, strengthening the patterns that contributed. Patterns traced back to mined source repos: Aegis_Atlas, CLI-Anything, and ClawTeam.

Live Run Results (March 2026)
Retrieved: 3 PULSE patterns from 3,590 methodologies
Prompt: 12 evaluation hints + 3 injected patterns
Output: async microservice | 12 files | endpoint separation
Verification: drift 0.894 | tests passing | idempotent tracking
Fitness: EMA feedback recorded | lifecycle transition confirmed
Attribution: Aegis_Atlas, CLI-Anything, ClawTeam
knowledge injection 3 patterns retrieved 3,590 methodologies drift 0.894 EMA fitness full attribution
15
Adaptive Intelligence

Bayesian Kelly Agent Routing

Sukhov (2026) Bayesian Kelly Criterion sizes agent task allocation from Beta posteriors. Kappa-shrinkage prevents overconfidence with small samples. Uncertainty discount reduces fitness scores for unreliable agents' methodologies up to 30%. Adaptive A/B margins demand bigger effects from thin data — no premature conclusions.

kelly.py + dispatcher.py — real routing data
f* = (p̄ - (1-p̄)/b) × n_eff / (n_eff + κ)
Kelly routing: architecture → claude 37.6% | gemini 26.1% | grok 26.1% | codex 10.2%
Kelly routing: analysis → claude 61.5% | codex 17.7% | gemini 17.7% | grok 3.1%
39 tests | kappa=10.0 | f_max=0.40 | exploration_floor=2%
bayesian kelly-criterion agent-routing 39 tests
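Reading the displayed formula with a Beta(1, 1) prior supplying p̄ and raw win/loss counts as n_eff gives a sketch like the following (b is the assumed payoff ratio; the exact priors in kelly.py may differ):

```python
def kelly_fraction(wins: int, losses: int, b: float = 1.0,
                   kappa: float = 10.0, f_max: float = 0.40) -> float:
    """Shrunk Kelly fraction: the classic edge p - (1 - p)/b, pulled toward
    zero by kappa-shrinkage while samples are few, then capped at f_max."""
    n = wins + losses
    p = (wins + 1) / (n + 2)            # Beta(1, 1) posterior mean
    raw = p - (1 - p) / b               # classic Kelly edge
    shrunk = raw * n / (n + kappa)      # conservative with few samples
    return max(0.0, min(shrunk, f_max))
```

With no data the fraction is exactly zero; nine wins out of ten shrink to 1/3 rather than the raw 2/3; overwhelming evidence still hits the f_max = 0.40 cap.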
16
A/B Knowledge Impact
7/8 quality checks won by KB-equipped agent

Retry Logic A/B Test — Qualitative Proof

Same task, two configurations. Run A: empty KB. Run B: full KB with 3,590 methodologies. Task: add retry logic with exponential backoff. Run B retrieved 5 battle-tested patterns from 4 repos in 1.4s and won 7 out of 8 quality checks.

Measured Results (April 2026)
Retrieved: 5 patterns from 4 repos
Run B wins: HTTP 429 awareness, jitter, bounded delay, error classification,
shared helper, error context, structured logging, fast-fail
Scorecard: KB-equipped 7/8 | Base 0/8
A/B test 7/8 quality wins qualitative
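For concreteness, a retry helper exhibiting the winning qualities (jitter, bounded delay, error classification, fast-fail) might look like this sketch; it is an illustration, not Run B's actual output:

```python
import random
import time

def retry(fn, attempts=5, base=0.5, cap=30.0,
          retry_on=(TimeoutError, ConnectionError), sleep=time.sleep):
    """Exponential backoff with full jitter and a bounded delay cap."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:                           # classify: only retryable errors
            if attempt == attempts - 1:
                raise                              # fast-fail once the budget is spent
            delay = min(cap, base * 2 ** attempt)  # bounded exponential delay
            sleep(random.uniform(0, delay))        # full jitter avoids thundering herds
```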
17
Statistical A/B Proof
+33.6% composite quality lift — 3/6 dimensions p<0.05

SkyDate SWE A/B Test — Blind, Statistical, 6-Dimensional

Full-stack SWE code generation on a Next.js history app. Blind 50/50 routing: control suppresses ALL knowledge (past_solutions + CAG corpus). 23 tasks, 6 evaluation phases, fully autonomous. KB-equipped arm: 100% success rate (8/8) vs 67% control (10/15). Near-zero variance (±0.001) proves consistent quality, not luck.

Statistical Results (April 2026)
Composite: 0.699 vs 0.523 — +33.6% | Cohen's d = 0.843 (large)
D1 Functional Correctness: p = 0.039 (significant)
D2 Structural Compliance: p = 0.024 (significant)
D6 Expectation Match: p = 0.039 (significant)
Success rate: 100% vs 67% | Variant variance: ±0.001
6 dimensions: Functional, Structural, Intent, Correction, Token, Expectation
blind A/B p<0.05 Cohen's d 0.843 100% success 6-dim SWE metric 23 tasks zero variance
18
RL Core Infrastructure
40 tests Epsilon-greedy + Thompson sampling — wired into every task

RL Method Tournament — Bandit Selection, Forbidden-on-Retry, Thompson Graduation

Every task now goes through an RL bandit that selects the best methodology from 3,590 mined patterns. 90% exploit (pick the best), 10% explore (discover hidden gems). After 5+ observations per (methodology, task_type), graduates from epsilon-greedy to Thompson sampling using Beta posteriors. Methods that fail twice on a task are forbidden on retry, forcing iteration through the ranked list. Cold-start protection gives under-tested methods 20% exploration.

Live Proof (April 2026)
3 queries against real DB (3,590 methodologies, real embeddings)
PRIMARY + CONTEXT selection verified on each query
Bandit outcomes recorded: 3 rows in methodology_bandit_outcomes
Forbidden-on-retry: 2 failures → excluded on next evaluate()
Relevance floor: 0.3 min score, skip KB if nothing qualifies
Infrastructure failures do NOT penalize methodologies
epsilon-greedy Thompson sampling forbidden-on-retry cold start 20% 40 tests live DB proof wired into core
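The graduation rule can be sketched as follows (a toy version; the production bandit also handles forbidden-on-retry lists and relevance floors):

```python
import random

def select_method(stats, epsilon=0.10, graduate_at=5, rng=random):
    """stats: {name: (wins, losses)}. Explores with probability epsilon;
    otherwise scores each method by its Beta-posterior mean until it has
    graduate_at observations, then by a Thompson sample."""
    def score(w, l):
        if w + l >= graduate_at:
            return rng.betavariate(w + 1, l + 1)   # Thompson sample
        return (w + 1) / (w + l + 2)               # cold-start posterior mean
    if rng.random() < epsilon:
        return rng.choice(list(stats))             # explore
    return max(stats, key=lambda m: score(*stats[m]))
```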
19
Cross-Brain Intelligence
40 results × 7 brains One query. Seven brains. Patterns no single brain could find alone.

Cross-Brain Pattern Atlas — "Design defense-in-depth security for a multi-tenant AI agent gateway"

That single query searched 3,590 methodologies across all 7 brains (General, Drive-ops, Rust, TypeScript, Agentic-memory, Go, Misc) simultaneously. Result: Rust’s WASM sandbox with dual metering + taint tracking. Go’s five-layer permission lattice + AES-GCM encrypted secret storage. TypeScript’s defense-in-depth API gating + tiered AI gateway. Python’s pre-ingestion secret scanning + RLS multi-tenancy. Two universal patterns discovered automatically (multi-tenant isolation appeared in both Python and Go; MCP proxy appeared in both Rust and TypeScript). 108 transferable insights generated — each one says “this brain’s technique has no equivalent in that brain.” An 8-layer composition assembled the best pattern from each domain into a layered architecture. No LLM hallucination. Every result is a real methodology mined from a real repo.

Live Output (April 2026)
$ cam federate "defense-in-depth security for a multi-tenant AI agent gateway"

Brains queried: 7 | Results: 40 | Coverage: 100%

UNIVERSAL PATTERNS (same concept, multiple languages):
• Multi-Tenant Dataset Isolation [python, go] (overlap: 0.32)
• MCP Proxy for Multi-Repo Support [rust, typescript] (overlap: 0.30)

UNIQUE INNOVATIONS (10 per brain):
• [rust] WASM Sandbox with Dual Metering (from openfang)
• [rust] Merkle Hash-Chain Audit Trail (from openfang)
• [rust] Information Flow Taint Tracking (from openfang)
• [go] Five-Layer Permission Lattice (from goclaw)
• [go] AES-GCM Encrypted Secret Storage (from goclaw)
• [go] Security Audit with Risk Scoring (from skillshare)
• [typescript] Defense-in-Depth API Gating (from dram-quest)
• [typescript] Secure AI Gateway + Tiered Models (from dram-quest)
• [general] Pre-Ingestion Secret Scanning (from a_aSatzClaw)
• [general] Four-Technique PHI Redaction (from a_a_betaQ)

8-LAYER COMPOSITION:
L1 [rust] AI integration → L2 [general] architecture
L3 [rust] CLI UX → L4 [rust] code quality
L5 [go] cross-cutting → L6 [go] data processing
L7 [typescript] design patterns → L8 [general] security

108 transferable insights | 3,734 tests passing | 0 regressions
7 brains queried 40 real results 2 universal patterns 108 transferable insights 8-layer composition 3,590 methodologies zero hallucination
20
Paired A/B Proof — Statistically Significant
p = 0.015 Wilcoxon signed-rank — 26 paired within-subject comparisons

Knowledge Ablation: Same Task, Same Agent, Blind A/B

The definitive test: each of 26 coding tasks was run twice on the same agent — once with the full knowledge base (3,590 methodologies), once with knowledge suppressed. Neither the agent nor the verifier knows which arm it’s in. The paired design eliminates agent confounding and task difficulty variance, requiring 4× fewer samples than unpaired experiments. Result: knowledge-equipped agents succeeded on 92.3% of tasks vs 73.1% control. On the 7 discordant pairs (one arm succeeded, one failed), the variant won 6 out of 7. When a task is borderline — hard enough that the agent might fail — the knowledge base tips it toward success.

Statistical Results (April 7, 2026 — 39 min, 5 agents, 26 pairs)
Success: 92.3% variant vs 73.1% control | +19.2 pp lift
Composite: 0.804 vs 0.660 — +0.144 mean paired diff
Wilcoxon signed-rank (one-sided): W=122, p=0.0153
Paired t-test (one-sided): t=2.248, p=0.0168
Bootstrap 95% CI: [+0.023, +0.270] (excludes zero)
Cohen’s dz = 0.45 (medium effect)
McNemar discordant: 6 variant wins / 1 control win
Win/Tie/Loss: 9 / 15 / 2

Per agent: codex +0.116 | local +0.182 | grok +0.275 | claude +0.017 | gemini +0.138
All 5 agents show positive mean diff — KB effect is agent-independent
paired design p=0.015 Cohen’s dz 0.45 92.3% vs 73.1% 6:1 discordant 5 agents, all positive blind within-subject bootstrap CI excludes 0 full interactive report
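The bootstrap confidence interval on the mean paired difference can be reproduced in spirit with stdlib-only code (a sketch; the report's exact resampling scheme is not specified here):

```python
import random
import statistics

def paired_bootstrap_ci(variant, control, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean paired difference (variant - control).
    If the whole CI sits above zero, the KB lift is unlikely to be noise."""
    diffs = [v - c for v, c in zip(variant, control)]
    rng = random.Random(seed)
    means = sorted(statistics.mean(rng.choices(diffs, k=len(diffs)))
                   for _ in range(n_boot))
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return statistics.mean(diffs), (lo, hi)
```

The paired design matters: resampling per-task differences, rather than two independent arms, is what cancels task-difficulty variance.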
21
Real CLI Tool — Built From Mined Knowledge
1,346,855 files indexed in a single real-world scan — 97.55 GB, zero dependencies

TidyHome — The Tool You’ll Actually Use Tomorrow

Not a benchmark. Not a meta-tool. Not a synthetic A/B report. A real Python CLI that cam create --execute built end-to-end from CAM-PULSE’s file-management, CLI, scanning, and dedup methodology families. Then verified against an actual home directory: 1.35 million files, 97.55 GB, 360 permission errors handled gracefully. Smart suggestions surfaced 4.8 GB clearly reclaimable (cache + installers), 27.9 GB in ML model files, 63,300 files untouched for 1+ year. 38 tests. 90% coverage. 16/16 validation steps pass in the harness.

Verified end-to-end (April 10, 2026)
Source: 8 modules, 459 statements, 0 pip deps (Python stdlib only)
Tests: 38 passed, 0 failed, 90% code coverage
Harness: 16/16 validation steps pass, exit 0

Real scan: 1,346,855 files indexed / 360 skipped
Total size: 97.55 GB across 1,211 unique extensions
SQLite index: 529 MB at ~/.tidyhome/index.db

Smart suggestions:
  [CACHE] 202,918 files 3.5 GB reclaimable
  [INSTALLER] 623 files 1.3 GB reclaimable
  [ARCHIVE] 6,051 files 4.0 GB reclaimable
  [MODEL] 411 files 27.9 GB (redownloadable)
  [STALE-1Y] 63,300 files untouched 1+ year

KB sources: MiroFish, abacus_FileSearch, CLI-Anything,
  app_organizer, AMM + Rust/Go/Python brain resilience
1.35M files scanned 97.55 GB indexed zero pip deps 38 tests, 90% coverage 16/16 harness steps 4.8 GB reclaimable dry-run by default full writeup
NEW — Web UI

See Everything.
Control Everything.

14 interactive pages. 40 real API endpoints. Every feature CAM has, now in your browser. Search knowledge, watch agents execute, build new brains, track evolution, mine repos — all from a single interface.

localhost:3000
Dashboard
3,590 Methods | 5 Brains | 55 Repos | 92.3% KB Win
Lifecycle: active · matured · retired
Languages: Python 68% | TS 28% | Rust 3%
localhost:3000/playground
Playground — Execution Theater
“Add retry logic with exponential backoff”
7-Gate Verification · Step Log
✓ grab — retrieved 5
✓ evaluate — assessed
✓ decide — selected agent
↻ act — writing 3 files...
Corrections: Attempt 1: 2 violations → Attempt 2: all gates pass
localhost:3000/knowledge/gaps
Gap Heatmap — Coverage Matrix
            Python   TS   Rust   Go   Misc
arch            42   18      9    4      1
testing         38   12      5    2      -
api             22   31      -    6      -
security         5    -      7    1      -
▶ security × Python selected — Mine this gap | Search related
localhost:3000/evolution
Evolution Lab — Fitness Trajectories
Tabs: A/B Tests | Routing | Fitness | Bandit
Window: Apr 1 – Apr 9
novelty 0.82 | reuse 0.91 | correct 0.88
Brain Graph Visualization
7-Gate Live Verification
Correction Replay
Gap-to-Mine Pipeline
Budget Tracking
Cross-Brain Federation
# Start CAM-PULSE Web UI
cam dashboard & # FastAPI on :8420
cd forge-ui && npm install && npm run dev # Next.js on :3000

Architecture & Deployment

Five agent backends. Bayesian Kelly routing. Three deployment paths. One unified pipeline.

OpenRouter

User-configured via .env

Access to Claude, GPT, Gemini, and dozens more through one routing layer.

analysis documentation

Google Gemini

gemini-embedding-2-preview

Embeddings, repo comprehension, and deep dependency analysis.

dependency analysis comprehension

xAI Grok

grok-4 family

Native X search, quick fixes, and live web lookup for the discovery pipeline.

quick fixes web lookup x_search

Ollama / MLX-LM

llama3.2 (local)

Private, offline, and zero cloud. Native Apple Silicon acceleration.

offline private Apple Silicon

Bayesian Kelly Routing Overlay

Instead of static routing tags, Kelly Criterion selects the best agent per task type from real win/loss data. Each agent maintains a Beta posterior; kappa-shrinkage keeps decisions conservative until enough evidence accumulates. Priority chain: Kelly (with data) → recommended → exploration → learned → static → fallback. Real routing weights from production: architecture: claude 37.6% | gemini 26.1% | grok 26.1% | codex 10.2%

Deploy Anywhere

📦

pip install

Lightweight and direct. No torch required.

pip install -e .
🐳

Docker

One command. Everything included.

docker compose up --build
🐝

PULSE Swarm

Multi-container deployment with dedicated scout workers.

docker compose -f pulse/docker-compose.pulse.yml up -d
# pulse/docker-compose.pulse.yml
services:
  pulse-orchestrator:
    command: ["cam", "pulse", "daemon"]
    restart: unless-stopped

  pulse-scout-ai:
    command: ["cam", "pulse", "scan", "--keywords",
             "AI agent framework new repo github.com"]

  pulse-scout-tools:
    command: ["cam", "pulse", "scan", "--keywords",
             "developer tools CLI open source github.com"]

Where CAM-PULSE Fits

Every AI coding tool generates code. Only CAM discovers patterns autonomously, remembers what works across repos, routes to the best agent per task, and proves what it did with real tests.

Capability comparison: GitHub Copilot, Cursor, Windsurf, Aider vs CAM-PULSE

  • Verifies diffs actually happened: CAM-PULSE only (fails if nothing changed)
  • Persistent cross-repo memory: competitors are workspace- or session-scoped; CAM-PULSE keeps 3,590 methodologies with lifecycle tracking
  • Discovers new patterns autonomously: CAM-PULSE only (X-Scout + auto-learning)
  • Applies learned knowledge to builds: CAM-PULSE only (retrieved, injected, attributed)
  • Adaptive agent routing that learns from outcomes: CAM-PULSE only (Bayesian Kelly Criterion)
  • Runs 100% local, zero cloud: some competitors offer select local models; CAM-PULSE runs fully local via Ollama + MLX-LM
  • Reports honest failures: partial at best elsewhere; CAM-PULSE reports "0% lift" when that is the truth
  • Cost: Copilot $19/mo, Cursor $20/mo, Windsurf $0–40/mo, Aider free + API costs; CAM-PULSE free, MIT licensed

Good fit if you need to…

  • Evaluate a repo and get a real improvement plan, not a chat answer
  • Mine patterns from a folder of repos into persistent, reusable knowledge
  • Generate new apps backed by a reviewable spec
  • Validate that your agent's changes actually exist in the filesystem
  • Run everything locally with Ollama or MLX-LM
  • Let the system learn which agent is best for each task type automatically
  • Mine your own project for reusable patterns with cam mine-self
  • Batch-mine repos across directories with cam mine-workspace
  • Find knowledge gaps and discover new categories with cam gaps --discover
  • Explore the corpus with epsilon re-ranking for serendipitous retrieval

Add PULSE discovery if you also need to…

  • Discover new GitHub repos from X automatically
  • Filter discoveries against your existing knowledge
  • Auto-learn from novel repos with clone, mine, and store
  • Run a perpetual daemon that polls every 30 minutes
  • Deploy a Docker swarm with multiple scout workers
  • Let the system periodically mine its own source code for improvements

Built to Fail Honestly

Most AI tools hide failures. CAM is built for the opposite: seven hardened checkpoints where the system will tell you the truth, even when the truth is zero.

01

Rejects hallucinated success

Other agents say they updated files. CAM checks the actual diff. If nothing changed, the run fails.

02

Namespace-safe execution

In fixed-mode, CAM rejects agent output that introduces new top-level source namespaces. --namespace-safe-retry hardens retries.

03

Preflight contract system

Before risky execution, CAM asks the high-value questions, creates a reusable task contract, and blocks unsafe work.

04

Honest benchmark reporting

The benchmark harness reports "0% lift" when the corpus does not beat baseline.

05

Documented limits

CAM states what does not work yet. --execute is not presented as a guaranteed autonomous builder.

06

Full provenance tracking

Every mined methodology records source repo URL, discovery date, and license type.

07

Uncertainty-aware routing

Agents with few samples get kappa-shrinkage discounts. Kelly fractions stay near zero until real evidence accumulates. A/B test margins adapt to sample size instead of using a fixed threshold.

Roadmap

From drop-in skill to adaptive intelligence.

Phase 1
Core engine: evaluate, mine, create, validate, benchmark
Phase 2
Local-first: Docker, Ollama, MLX-LM, torch-free install
Phase 3
PULSE: X-Scout, novelty filter, auto-learning, 16/16 live scan
Phase 3.5
Self-enhancement with 7-gate validation and atomic swap
Phase 3.9
Resilience: correction loop, metric gates, freshness, secret scanning
Phase 4.0
CAM Swarm: federation, fitness feedback, configurable mining filters
Phase 4.5
Drive-Ops: 1.5TB ganglion mining, content dedup, brain federation proven at scale
Phase 5.0
Bayesian Kelly routing, adaptive A/B margins, uncertainty-aware fitness, EMA feedback
Phase 5.5
Self-awareness: seed knowledge (31 methodologies ship with install), yield-priority mining, importer search-visibility fix
Phase 5.9
Knowledge impact: A/B test (KB wins 7/8), structured JSON logging, Pydantic tool schemas, post-mine self-assessment
Phase 6.0
Web UI: 14-page Next.js frontend, Forge Builder, execution playground, gap heatmap, evolution lab, brain graph
Phase 7
Enterprise: sandbox, audit logs, webhooks
Phase 8
Community hub, fleet self-enhancement, embedding hot-swap