Commands Reference

Complete guide to all gptcode commands and their usage.

Quick Navigation

Setup · Interactive · Workflow · Review · Features · Run · ML · Graph · Config

gptcode do - Autonomous Execution

The flagship command. It orchestrates four specialized agents (analyzer, planner, editor, validator) to complete tasks autonomously, with file-scope validation and automatic retry.

How It Works

gptcode do "add JWT authentication"

Agent Flow:

  1. Analyzer - Understands codebase using dependency graph, reads relevant files
  2. Planner - Creates minimal implementation plan, lists files to modify
  3. File Validation - Extracts allowed files, blocks extras
  4. Editor - Executes changes ONLY on planned files
  5. Validator - Checks success criteria, triggers auto-retry if validation fails
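The flow above can be sketched as a simple pipeline: plan once, freeze the allowed file list, then retry editing until validation passes. This is an illustrative sketch with hypothetical function parameters, not the actual agent implementation:

```python
def run_task(task, analyze, plan, edit, validate, max_retries=2):
    """Hypothetical sketch of the `do` pipeline with file validation
    and auto-retry. The agent callables are placeholders."""
    context = analyze(task)            # 1. Analyzer: gather relevant files
    planned = plan(task, context)      # 2. Planner: minimal file list
    allowed = set(planned)             # 3. File validation: freeze the list
    for _attempt in range(max_retries + 1):
        edits = edit(task, allowed)    # 4. Editor: touch only allowed files
        extras = set(edits) - allowed
        if extras:
            raise ValueError(f"editor touched unplanned files: {sorted(extras)}")
        if validate(edits):            # 5. Validator: success criteria
            return edits
    raise RuntimeError("validation failed after retries")
```

The key design point is step 3: the editor receives a frozen file list, so any edit outside the plan is rejected rather than silently applied.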

Examples

gptcode do "fix authentication bug in login handler"
gptcode do "refactor error handling to use custom types"
gptcode do "add rate limiting to API endpoints" --supervised
gptcode do "optimize database queries" --interactive

Flags

Benefits

See full agent architecture →


Setup Commands

gptcode setup

Initialize GPTCode configuration at ~/.gptcode.

gptcode setup

Creates:

gptcode key [backend]

Add or update API key for a backend provider.

gptcode key openrouter
gptcode key groq

gptcode models update

Update model catalog from available providers (OpenRouter, Groq, OpenAI, etc.).

gptcode models update

Interactive Modes

gptcode chat

Code-focused conversation mode. Routes queries to appropriate agents based on intent.

gptcode chat
gptcode chat "explain how authentication works"
echo "list go files" | gptcode chat

Agent routing:

gptcode tdd

Incremental TDD mode. Generates tests first, then implementation.

gptcode tdd
gptcode tdd "slugify function with unicode support"

Workflow:

  1. Clarify requirements
  2. Generate tests
  3. Generate implementation
  4. Iterate and refine

Workflow Commands (Research → Plan → Implement)

gptcode research [question]

Document codebase and understand architecture.

gptcode research "How does authentication work?"
gptcode research "Explain the payment flow"

Creates a research document with findings and analysis.

gptcode plan [task]

Create detailed implementation plan with phases.

gptcode plan "Add user authentication"
gptcode plan "Implement webhook system"

Generates:

gptcode implement <plan_file>

Execute an approved plan phase-by-phase with verification.

gptcode implement ~/.gptcode/plans/2025-01-15-add-auth.md

Each phase:

  1. Implemented
  2. Verified (tests run)
  3. User confirms before next phase
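The phase loop above can be sketched as follows (hypothetical callables standing in for the real implementation, verification, and confirmation steps):

```python
def run_plan(phases, implement, verify, confirm):
    """Execute a plan phase by phase: implement, verify with tests,
    then wait for user confirmation before starting the next phase."""
    completed = 0
    for i, phase in enumerate(phases, start=1):
        implement(phase)                 # 1. Implemented
        if not verify(phase):            # 2. Verified (tests run)
            raise RuntimeError(f"phase {i} failed verification")
        completed = i
        if i < len(phases) and not confirm(i):
            break                        # 3. User declined the next phase
    return completed
```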

Code Quality

gptcode review [target]

NEW: Review code for bugs, security issues, and improvements against coding standards.

gptcode review main.go
gptcode review ./src
gptcode review .
gptcode review internal/agents/ --focus security

Options:

Reviews against standards:

Output structure:

  1. Summary: Overall assessment
  2. Critical Issues: Must-fix bugs or security risks
  3. Suggestions: Quality/performance improvements
  4. Nitpicks: Style, naming preferences

Examples:

gptcode review main.go --focus "error handling"
gptcode review . --focus performance
gptcode review src/auth --focus security

Feature Generation

gptcode feature [description]

Generate tests + implementation with auto-detected language.

gptcode feature "slugify with unicode support and max length"

Supported languages:


Execution Mode

gptcode run [task]

Execute tasks with follow-up support. Two modes available:

1. AI-assisted mode (default when no args provided):

gptcode run                                    # Start interactive session
gptcode run "deploy to staging" --once         # Single AI execution

2. Direct REPL mode with command history:

gptcode run --raw                              # Interactive command execution
gptcode run "docker ps" --raw                  # Execute command and exit

AI-Assisted Mode

Provides intelligent command suggestions and execution:

gptcode run
> deploy to staging
[AI executes fly deploy command]
> check if it's running
[AI executes status check]
> roll back if there are errors
[AI conditionally executes rollback]

Direct REPL Mode

Direct shell command execution with enhanced features:

gptcode run --raw
> ls -la
> cat $1                    # Reference previous command
> /history                  # Show command history
> /output 1                 # Show output of command 1
> /cd /tmp                  # Change directory
> /env MY_VAR=value         # Set environment variable
> /exit                     # Exit REPL
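The $N back-references can be modeled as a small history stack. This sketch assumes $1 expands to the most recent command and $2 to the one before it; the real REPL may resolve references against outputs rather than command text:

```python
import re

class CommandHistory:
    """Illustrative sketch of $N back-references in the REPL."""

    def __init__(self):
        self._commands = []

    def expand(self, line):
        # Replace each $N with the Nth-most-recent recorded command.
        return re.sub(r"\$(\d+)",
                      lambda m: self._commands[-int(m.group(1))],
                      line)

    def run(self, line):
        line = self.expand(line)
        self._commands.append(line)
        return line
```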

REPL Commands:

Command References:

Examples

# AI-assisted operational tasks
gptcode run "check postgres status"
gptcode run "make GET request to api.github.com/users/octocat"

# Direct command REPL for DevOps
gptcode run --raw
> docker ps
> docker logs $1            # Reference container from previous output
> /cd /var/log
> tail -f app.log

# Single-shot with piped input
echo "deploy to production" | gptcode run --once

Flags

Perfect for operational tasks, DevOps workflows, and command execution with history.


Machine Learning Commands

gptcode ml list

List available ML models.

gptcode ml list

Shows:

gptcode ml train <model>

Train an ML model using Python.

gptcode ml train complexity
gptcode ml train intent

Available models:

Requirements:

gptcode ml test <model> [query]

Test a trained model with a query.

gptcode ml test complexity "implement oauth"
gptcode ml test intent "explain this code"

Shows prediction and probabilities for all classes.

gptcode ml eval <model> [-f file]

Evaluate model performance on test dataset.

gptcode ml eval complexity
gptcode ml eval intent -f ml/intent/data/eval.csv

Shows:

gptcode ml predict [model] <text>

Make a prediction using the embedded Go model (no Python runtime required).

gptcode ml predict "implement auth"               # uses complexity (default)
gptcode ml predict complexity "fix typo"          # explicit model
gptcode ml predict intent "explain this code"     # intent classification

Fast path:


ML Configuration

Complexity Threshold

Controls when Guided Mode is automatically activated.

# View current threshold (default: 0.55)
gptcode config get defaults.ml_complex_threshold

# Set threshold (0.0-1.0)
gptcode config set defaults.ml_complex_threshold 0.6

Higher threshold = less sensitive (fewer Guided Mode triggers)
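The gating logic amounts to a single comparison against the configured threshold, as in this sketch (the function name is illustrative):

```python
def should_guide(complexity, threshold=0.55):
    """Activate Guided Mode when the predicted task complexity
    exceeds the configured threshold (default 0.55)."""
    return complexity > threshold
```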

Intent Threshold

Controls when ML router is used instead of LLM.

# View current threshold (default: 0.7)
gptcode config get defaults.ml_intent_threshold

# Set threshold (0.0-1.0)
gptcode config set defaults.ml_intent_threshold 0.8

Higher threshold = more LLM fallbacks (more accurate, but slower and more expensive)
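The fallback behavior can be sketched as a confidence check (hypothetical classifier callables; the real router's interface may differ):

```python
def route_intent(query, ml_classify, llm_classify, threshold=0.7):
    """Use the fast ML router when its confidence clears the threshold;
    otherwise fall back to the slower, more accurate LLM."""
    intent, confidence = ml_classify(query)
    if confidence >= threshold:
        return intent, "ml"
    return llm_classify(query), "llm"
```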


Dependency Graph Commands

gptcode graph build

Force rebuild dependency graph, ignoring cache.

gptcode graph build

Shows:

When to use:

gptcode graph query <terms>

Find relevant files for a query term using PageRank.

gptcode graph query "authentication"
gptcode graph query "database connection"
gptcode graph query "api routes"

Shows:

How it works:

  1. Keyword matching in file paths
  2. Neighbor expansion (imports/imported-by)
  3. PageRank weighting
  4. Top N selection

Graph Configuration

Max Files

Control how many files are added to context in chat mode.

# View current setting (default: 5)
gptcode config get defaults.graph_max_files

# Set max files (1-20)
gptcode config set defaults.graph_max_files 8

Recommendations:

Debug Graph

export GPTCODE_DEBUG=1
gptcode chat "your query"  # Shows graph stats

Shows:


Command Comparison

Command    | Purpose                    | When to Use
chat       | Interactive conversation   | Quick questions, exploratory work
review     | Code review                | Before commit, quality check, security audit
tdd        | TDD workflow               | New features requiring tests
research   | Understand codebase        | Architecture analysis, onboarding
plan       | Create implementation plan | Large features, complex changes
implement  | Execute plan               | Structured feature implementation
feature    | Quick feature generation   | Small, focused features
run        | Execute tasks              | DevOps, HTTP requests, CLI commands

Backend Management

gptcode backend

Show current backend.

gptcode backend

Shows:

gptcode backend list

List all configured backends.

gptcode backend list

gptcode backend show [name]

Show a backend's configuration. Defaults to the current backend if no name is provided.

gptcode backend show groq

Shows:

gptcode backend use <name>

Switch to a backend.

gptcode backend use groq
gptcode backend use openrouter
gptcode backend use ollama

gptcode backend create <name> <type> <base_url>

Create a new backend.

gptcode backend create mygroq openai https://api.groq.com/openai/v1
gptcode key mygroq  # Set API key
gptcode config set backend.mygroq.default_model llama-3.3-70b-versatile
gptcode backend use mygroq

gptcode backend delete <name>

Delete a backend.

gptcode backend delete mygroq

Profile Management

gptcode profile

Show current profile.

gptcode profile

Shows:

gptcode profile list [backend]

List all profiles. Optionally filter by backend.

gptcode profile list              # All profiles
gptcode profile list groq        # Only groq profiles

Shows profiles in backend.profile format.

gptcode profile show [backend.profile]

Show a profile's configuration. Defaults to the current profile if none is specified.

gptcode profile show groq.speed
gptcode profile show              # Current profile

gptcode profile use <backend>.<profile>

Switch to a backend and profile in one command.

gptcode profile use groq.speed
gptcode profile use openrouter.free
gptcode profile use ollama.local

Benefits:

Advanced Profile Commands

For creating and configuring profiles, use gptcode profiles (plural):

# Create new profile
gptcode profiles create groq speed

# Configure agents
gptcode profiles set-agent groq speed router llama-3.1-8b-instant
gptcode profiles set-agent groq speed query llama-3.1-8b-instant
gptcode profiles set-agent groq speed editor llama-3.1-8b-instant
gptcode profiles set-agent groq speed research llama-3.1-8b-instant

Environment Variables

GPTCODE_DEBUG

Enable debug output to stderr.

GPTCODE_DEBUG=1 gptcode chat

Shows:


Configuration

All configuration lives in ~/.gptcode/:

~/.gptcode/
├── profile.yaml          # Backend and model settings
├── system_prompt.md      # Base system prompt
├── memories.jsonl        # Memory store
└── plans/               # Saved implementation plans
    └── 2025-01-15-add-auth.md

Example profile.yaml

defaults:
  backend: groq
  model: fast

backends:
  groq:
    type: chat_completion
    base_url: https://api.groq.com/openai/v1
    default_model: llama-3.3-70b-versatile
    models:
      fast: llama-3.3-70b-versatile
      smart: llama-3.3-70b-specdec
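As a sketch of how a model alias like fast resolves against this config, consider the following (the function and fallback behavior are assumptions for illustration; gptcode's actual resolution logic may differ):

```python
def resolve_model(config, backend=None, alias=None):
    """Resolve a model alias (e.g. 'fast') against a parsed profile.yaml
    loaded into a dict, falling back to the backend's default_model."""
    defaults = config.get("defaults", {})
    backend = backend or defaults["backend"]
    b = config["backends"][backend]
    alias = alias or defaults.get("model")
    return b.get("models", {}).get(alias, b["default_model"])
```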

Advanced Configuration

Direct Config Manipulation

For advanced users who need direct access to configuration values:

# Get configuration value
gptcode config get defaults.backend
gptcode config get defaults.profile
gptcode config get backend.groq.default_model

# Set configuration value
gptcode config set defaults.backend groq
gptcode config set defaults.profile speed
gptcode config set backend.groq.default_model llama-3.3-70b-versatile

Note: For most use cases, prefer the user-friendly commands:


Next Steps