Commands Reference

Complete guide to all gt commands and their usage.

Quick Navigation

Setup · Interactive · Workflow · Review · Features · Run · ML · Graph · Config

gt do - Autonomous Execution

The flagship copilot command. It orchestrates four specialized agents, plus a file-validation gate, to autonomously complete tasks with validation and automatic retry.

How It Works

gt do "add JWT authentication"

Agent Flow:

  1. Analyzer - Understands codebase using dependency graph, reads relevant files
  2. Planner - Creates minimal implementation plan, lists files to modify
  3. File Validation - Extracts allowed files, blocks extras
  4. Editor - Executes changes ONLY on planned files
  5. Validator - Checks success criteria, triggers auto-retry if validation fails

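The file-validation gate (step 3) is the key safety property of this flow: the Editor is only allowed to touch files the Planner listed. A minimal sketch of that check, using hypothetical file names and function names rather than the tool's actual internals:

```go
package main

import "fmt"

// allowed reports whether the Editor may touch path, given the Planner's list.
func allowed(planned []string, path string) bool {
	for _, p := range planned {
		if p == path {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical plan produced by the Analyzer + Planner stages.
	planned := []string{"auth/jwt.go", "auth/middleware.go"}

	// The file-validation gate blocks any edit outside the plan.
	for _, path := range []string{"auth/jwt.go", "main.go"} {
		if allowed(planned, path) {
			fmt.Printf("edit %s\n", path)
		} else {
			fmt.Printf("blocked %s (not in plan)\n", path)
		}
	}
}
```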
Examples

gt do "fix authentication bug in login handler"
gt do "refactor error handling to use custom types"
gt do "add rate limiting to API endpoints" --supervised
gt do "optimize database queries" --interactive

Flags

Benefits

See full agent architecture →


Setup Commands

gt setup

Initialize GPTCode configuration at ~/.gptcode.

gt setup

Creates:

gptcode key [backend]

Add or update API key for a backend provider.

gptcode key openrouter
gptcode key groq

gt models update

Update model catalog from available providers (OpenRouter, Groq, OpenAI, etc.).

gt models update

Interactive Modes

gt chat

Code-focused conversation mode. Routes queries to appropriate agents based on intent.

gt chat
gt chat "explain how authentication works"
echo "list go files" | gt chat

Agent routing:

gt tdd

Incremental TDD mode. Generates tests first, then implementation.

gt tdd
gt tdd "slugify function with unicode support"

Workflow:

  1. Clarify requirements
  2. Generate tests
  3. Generate implementation
  4. Iterate and refine

Workflow Commands (Research → Plan → Implement)

gt research [question]

Document codebase and understand architecture.

gt research "How does authentication work?"
gt research "Explain the payment flow"

Creates a research document with findings and analysis.

gt plan [task]

Create detailed implementation plan with phases.

gt plan "Add user authentication"
gt plan "Implement webhook system"

Generates:

gptcode implement <plan_file>

Execute an approved plan phase-by-phase with verification.

gptcode implement ~/.gptcode/plans/2025-01-15-add-auth.md

Each phase:

  1. Implemented
  2. Verified (tests are run)
  3. Confirmed by the user before the next phase starts

Code Quality

gt review [target]

NEW: Review code for bugs, security issues, and improvements against coding standards.

gt review main.go
gt review ./src
gt review .
gt review internal/agents/ --focus security

Options:

Reviews against standards:

Output structure:

  1. Summary: Overall assessment
  2. Critical Issues: Must-fix bugs or security risks
  3. Suggestions: Quality/performance improvements
  4. Nitpicks: Style, naming preferences

Examples:

gt review main.go --focus "error handling"
gt review . --focus performance
gt review src/auth --focus security

Feature Generation

gt feature [description]

Generate tests + implementation with auto-detected language.

gt feature "slugify with unicode support and max length"

Supported languages:


Execution Mode

gt run [task]

Execute tasks with follow-up support. Two modes available:

1. AI-assisted mode (default when no args provided):

gt run                                    # Start interactive session
gt run "deploy to staging" --once         # Single AI execution

2. Direct REPL mode with command history:

gt run --raw                              # Interactive command execution
gt run "docker ps" --raw                  # Execute command and exit

AI-Assisted Mode

Provides intelligent command suggestions and execution:

gt run
> deploy to staging
[AI executes fly deploy command]
> check if it's running
[AI executes status check]
> roll back if there are errors
[AI conditionally executes rollback]

Direct REPL Mode

Direct shell command execution with enhanced features:

gt run --raw
> ls -la
> cat $1                    # Reference previous command
> /history                  # Show command history
> /output 1                 # Show output of command 1
> /cd /tmp                  # Change directory
> /env MY_VAR=value         # Set environment variable
> /exit                     # Exit REPL
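
The `$N` references above are history substitutions. One plausible reading of the semantics (assumed here, not confirmed by the tool's source) is that `$N` expands to the Nth command in the session history:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// expandRefs replaces $N tokens with the Nth command from history,
// mimicking the assumed behavior of the --raw REPL's references.
// Out-of-range references are left as literal text.
func expandRefs(input string, history []string) string {
	re := regexp.MustCompile(`\$(\d+)`)
	return re.ReplaceAllStringFunc(input, func(tok string) string {
		n, _ := strconv.Atoi(tok[1:])
		if n >= 1 && n <= len(history) {
			return history[n-1]
		}
		return tok
	})
}

func main() {
	history := []string{"docker ps"}
	fmt.Println(expandRefs("echo ran: $1", history)) // echo ran: docker ps
}
```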

REPL Commands:

Command References:

Examples

# AI-assisted operational tasks
gt run "check postgres status"
gt run "make GET request to api.github.com/users/octocat"

# Direct command REPL for DevOps
gt run --raw
> docker ps
> docker logs $1            # Reference container from previous output
> /cd /var/log
> tail -f app.log

# Single-shot with piped input
echo "deploy to production" | gt run --once

Flags

Perfect for operational tasks, DevOps workflows, and command execution with history.


Machine Learning Commands

gt ml list

List available ML models.

gt ml list

Shows:

gt ml train <model>

Train an ML model using Python.

gt ml train complexity
gt ml train intent

Available models:

Requirements:

gt ml test <model> [query]

Test a trained model with a query.

gt ml test complexity "implement oauth"
gt ml test intent "explain this code"

Shows prediction and probabilities for all classes.

gt ml eval <model> [-f file]

Evaluate model performance on test dataset.

gt ml eval complexity
gt ml eval intent -f ml/intent/data/eval.csv

Shows:

gt ml predict [model] <text>

Make a prediction using the embedded Go model (no Python runtime required).

gt ml predict "implement auth"               # uses complexity (default)
gt ml predict complexity "fix typo"          # explicit model
gt ml predict intent "explain this code"     # intent classification

Fast path:


ML Configuration

Complexity Threshold

Controls when Guided Mode is automatically activated.

# View current threshold (default: 0.55)
gt config get defaults.ml_complex_threshold

# Set threshold (0.0-1.0)
gt config set defaults.ml_complex_threshold 0.6

Higher threshold = less sensitive (fewer Guided Mode triggers)

Intent Threshold

Controls when ML router is used instead of LLM.

# View current threshold (default: 0.7)
gt config get defaults.ml_intent_threshold

# Set threshold (0.0-1.0)
gt config set defaults.ml_intent_threshold 0.8

Higher threshold = more LLM fallbacks (more accurate but slower/expensive)
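
The two thresholds gate different decisions: the complexity threshold triggers Guided Mode, while the intent threshold decides whether the ML router's prediction is trusted or the query falls back to the LLM. A minimal sketch of the assumed logic (illustrative, not the actual implementation):

```go
package main

import "fmt"

// routeDecision sketches how the two thresholds are assumed to gate behavior:
// Guided Mode activates when the complexity score clears its threshold, and
// the ML router is trusted only when intent confidence clears its threshold.
func routeDecision(complexityScore, intentConfidence, complexThreshold, intentThreshold float64) (guidedMode, useMLRouter bool) {
	guidedMode = complexityScore >= complexThreshold
	useMLRouter = intentConfidence >= intentThreshold
	return
}

func main() {
	// Complexity 0.62 >= 0.55 -> Guided Mode on.
	// Intent confidence 0.65 < 0.7 -> fall back to the LLM router.
	guided, ml := routeDecision(0.62, 0.65, 0.55, 0.7)
	fmt.Println(guided, ml)
}
```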


Dependency Graph Commands

gt graph build

Force rebuild dependency graph, ignoring cache.

gt graph build

Shows:

When to use:

gt graph query <terms>

Find relevant files for a query term using PageRank.

gt graph query "authentication"
gt graph query "database connection"
gt graph query "api routes"

Shows:

How it works:

  1. Keyword matching in file paths
  2. Neighbor expansion (imports/imported-by)
  3. PageRank weighting
  4. Top N selection
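
The four steps above can be sketched as keyword scoring weighted by PageRank, followed by top-N selection. The scoring formula and file names below are illustrative assumptions, not the tool's actual ranking code (and the neighbor-expansion step is omitted for brevity):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// fileScore sketches steps 1 and 3: count query-term hits in the
// file path, then weight the hits by the file's PageRank.
func fileScore(path, query string, pagerank float64) float64 {
	score := 0.0
	for _, term := range strings.Fields(strings.ToLower(query)) {
		if strings.Contains(strings.ToLower(path), term) {
			score += 1.0
		}
	}
	return score * pagerank
}

// topN returns the best-scoring files (step 4), dropping zero scores.
func topN(scores map[string]float64, n int) []string {
	type kv struct {
		path  string
		score float64
	}
	var ranked []kv
	for p, s := range scores {
		if s > 0 {
			ranked = append(ranked, kv{p, s})
		}
	}
	sort.Slice(ranked, func(i, j int) bool { return ranked[i].score > ranked[j].score })
	if len(ranked) > n {
		ranked = ranked[:n]
	}
	out := make([]string, len(ranked))
	for i, r := range ranked {
		out[i] = r.path
	}
	return out
}

func main() {
	pagerank := map[string]float64{"auth/login.go": 0.3, "auth/token.go": 0.2, "db/conn.go": 0.1}
	scores := map[string]float64{}
	for path, rank := range pagerank {
		scores[path] = fileScore(path, "auth", rank)
	}
	fmt.Println(topN(scores, 2)) // [auth/login.go auth/token.go]
}
```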

Graph Configuration

Max Files

Control how many files are added to context in chat mode.

# View current setting (default: 5)
gt config get defaults.graph_max_files

# Set max files (1-20)
gt config set defaults.graph_max_files 8

Recommendations:

Debug Graph

export GPTCODE_DEBUG=1
gt chat "your query"  # Shows graph stats

Shows:


Command Comparison

| Command | Purpose | When to Use |
|---|---|---|
| chat | Interactive conversation | Quick questions, exploratory work |
| review | Code review | Before commit, quality check, security audit |
| tdd | TDD workflow | New features requiring tests |
| research | Understand codebase | Architecture analysis, onboarding |
| plan | Create implementation plan | Large features, complex changes |
| implement | Execute plan | Structured feature implementation |
| feature | Quick feature generation | Small, focused features |
| run | Execute tasks | DevOps, HTTP requests, CLI commands |

Backend Management

gt backend

Show current backend.

gt backend

Shows:

gt backend list

List all configured backends.

gt backend list

gt backend show [name]

Show backend configuration. Shows current if no name provided.

gt backend show groq

Shows:

gt backend use <name>

Switch to a backend.

gt backend use groq
gt backend use openrouter
gt backend use ollama

gt backend create

Create a new backend.

gt backend create mygroq openai https://api.groq.com/openai/v1
gptcode key mygroq  # Set API key
gt config set backend.mygroq.default_model llama-3.3-70b-versatile
gt backend use mygroq

gt backend delete

Delete a backend.

gt backend delete mygroq

Profile Management

gt profile

Show current profile.

gt profile

Shows:

gt profile list [backend]

List all profiles. Optionally filter by backend.

gt profile list              # All profiles
gt profile list groq        # Only groq profiles

Shows profiles in backend.profile format.

gt profile show [backend.profile]

Show profile configuration. Shows current if not specified.

gt profile show groq.speed
gt profile show              # Current profile

gt profile use <backend>.<profile>

Switch to a backend and profile in one command.

gt profile use groq.speed
gt profile use openrouter.free
gt profile use ollama.local

Benefits:

Advanced Profile Commands

For creating and configuring profiles, use gt profiles (plural):

# Create new profile
gt profiles create groq speed

# Configure agents
gt profiles set-agent groq speed router llama-3.1-8b-instant
gt profiles set-agent groq speed query llama-3.1-8b-instant
gt profiles set-agent groq speed editor llama-3.1-8b-instant
gt profiles set-agent groq speed research llama-3.1-8b-instant

Environment Variables

GPTCODE_DEBUG

Enable debug output to stderr.

GPTCODE_DEBUG=1 gt chat

Shows:


Configuration

All configuration lives in ~/.gptcode/:

~/.gptcode/
├── profile.yaml          # Backend and model settings
├── system_prompt.md      # Base system prompt
├── memories.jsonl        # Example memory store
└── plans/               # Saved implementation plans
    └── 2025-01-15-add-auth.md

Example profile.yaml

defaults:
  backend: groq
  model: fast

backends:
  groq:
    type: chat_completion
    base_url: https://api.groq.com/openai/v1
    default_model: llama-3.3-70b-versatile
    models:
      fast: llama-3.3-70b-versatile
      smart: llama-3.3-70b-specdec

Advanced Configuration

Direct Config Manipulation

For advanced users who need direct access to configuration values:

# Get configuration value
gt config get defaults.backend
gt config get defaults.profile
gt config get backend.groq.default_model

# Set configuration value
gt config set defaults.backend groq
gt config set defaults.profile speed
gt config set backend.groq.default_model llama-3.3-70b-versatile

Note: For most use cases, prefer the user-friendly commands:


Next Steps