Commands Reference
Complete guide to all gt commands and their usage.
gt do - Autonomous Execution
The flagship copilot command. Orchestrates 4 specialized agents to autonomously complete tasks with validation and auto-retry.
How It Works
gt do "add JWT authentication"
Agent Flow:
- Analyzer - Understands codebase using dependency graph, reads relevant files
- Planner - Creates minimal implementation plan, lists files to modify
- File Validation - Extracts allowed files, blocks extras
- Editor - Executes changes ONLY on planned files
- Validator - Checks success criteria, triggers auto-retry if validation fails
Examples
gt do "fix authentication bug in login handler"
gt do "refactor error handling to use custom types"
gt do "add rate limiting to API endpoints" --supervised
gt do "optimize database queries" --interactive
Flags
- --supervised – Require manual approval before implementation (critical tasks)
- --interactive – Prompt when model selection is ambiguous
- --dry-run – Show plan only, don't execute
- -v / --verbose – Show model selection and agent decisions
- --max-attempts N – Maximum retry attempts (default: 3)
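These flags can be combined. For example, a sketch using only the flags documented above (the task strings are placeholders):

```
# Preview the plan without executing, with agent decisions shown
gt do "add JWT authentication" --dry-run -v

# Critical change: require manual approval and allow more retries
gt do "migrate session storage" --supervised --max-attempts 5
```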
Benefits
- Automatic model selection: queries performance history and picks the best model per agent
- Auto-retry with feedback: switches to better models when validation fails
- File validation: prevents creating unintended files or modifying wrong code
- Success criteria: verifies task completion before finishing
- Cost optimized: uses cheaper models for routing, better models for editing
Setup Commands
gt setup
Initialize GPTCode configuration at ~/.gptcode.
gt setup
Creates:
- ~/.gptcode/profile.yaml – backend and model configuration
- ~/.gptcode/system_prompt.md – base system prompt
- ~/.gptcode/memories.jsonl – memory store for examples
gptcode key [backend]
Add or update API key for a backend provider.
gptcode key openrouter
gptcode key groq
gt models update
Update model catalog from available providers (OpenRouter, Groq, OpenAI, etc.).
gt models update
Interactive Modes
gt chat
Code-focused conversation mode. Routes queries to appropriate agents based on intent.
gt chat
gt chat "explain how authentication works"
echo "list go files" | gt chat
Agent routing:
- query – read/understand code
- edit – modify code
- research – external information
- test – run tests or commands
- review – code review and critique
gt tdd
Incremental TDD mode. Generates tests first, then implementation.
gt tdd
gt tdd "slugify function with unicode support"
Workflow:
- Clarify requirements
- Generate tests
- Generate implementation
- Iterate and refine
Workflow Commands (Research → Plan → Implement)
gt research [question]
Document codebase and understand architecture.
gt research "How does authentication work?"
gt research "Explain the payment flow"
Creates a research document with findings and analysis.
gt plan [task]
Create detailed implementation plan with phases.
gt plan "Add user authentication"
gt plan "Implement webhook system"
Generates:
- Problem statement
- Current state analysis
- Proposed changes with phases
- Saved to ~/.gptcode/plans/
gptcode implement <plan_file>
Execute an approved plan phase-by-phase with verification.
gptcode implement ~/.gptcode/plans/2025-01-15-add-auth.md
Each phase:
- Implemented
- Verified (tests run)
- User confirms before next phase
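The three workflow commands chain naturally into one session. A sketch of the end-to-end flow (the plan filename below is illustrative; gt plan prints the actual path it saves to):

```
gt research "How does authentication work?"
gt plan "Add user authentication"
# Review the saved plan, then execute it phase-by-phase:
gptcode implement ~/.gptcode/plans/2025-01-15-add-auth.md
```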
Code Quality
gt review [target]
NEW: Review code for bugs, security issues, and improvements against coding standards.
gt review main.go
gt review ./src
gt review .
gt review internal/agents/ --focus security
Options:
- --focus / -f – Focus area (security, performance, error handling)
Reviews against standards:
- Naming conventions (Clean Code, Code Complete)
- Language-specific best practices
- TDD principles
- Error handling and edge cases
Output structure:
- Summary: Overall assessment
- Critical Issues: Must-fix bugs or security risks
- Suggestions: Quality/performance improvements
- Nitpicks: Style, naming preferences
Examples:
gt review main.go --focus "error handling"
gt review . --focus performance
gt review src/auth --focus security
Feature Generation
gt feature [description]
Generate tests + implementation with auto-detected language.
gt feature "slugify with unicode support and max length"
Supported languages:
- Elixir (mix.exs)
- Ruby (Gemfile)
- Go (go.mod)
- TypeScript (package.json)
- Python (requirements.txt)
- Rust (Cargo.toml)
Execution Mode
gt run [task]
Execute tasks with follow-up support. Two modes available:
1. AI-assisted mode (default when no args provided):
gt run # Start interactive session
gt run "deploy to staging" --once # Single AI execution
2. Direct REPL mode with command history:
gt run --raw # Interactive command execution
gt run "docker ps" --raw # Execute command and exit
AI-Assisted Mode
Provides intelligent command suggestions and execution:
- Command history tracking
- Output reference ($1, $2, $last)
- Directory and environment management
- Context preservation across commands
gt run
> deploy to staging
[AI executes fly deploy command]
> check if it's running
[AI executes status check]
> roll back if there are errors
[AI conditionally executes rollback]
Direct REPL Mode
Direct shell command execution with enhanced features:
gt run --raw
> ls -la
> cat $1 # Reference previous command
> /history # Show command history
> /output 1 # Show output of command 1
> /cd /tmp # Change directory
> /env MY_VAR=value # Set environment variable
> /exit # Exit REPL
REPL Commands:
- /exit, /quit – Exit run session
- /help – Show available commands
- /history – Show command history
- /output <id> – Show output of previous command
- /cd <dir> – Change working directory
- /env [key[=value]] – Show/set environment variables
Command References:
- $last – Reference the last command
- $1, $2, … – Reference command by ID
Examples
# AI-assisted operational tasks
gt run "check postgres status"
gt run "make GET request to api.github.com/users/octocat"
# Direct command REPL for DevOps
gt run --raw
> docker ps
> docker logs $1 # Reference container from previous output
> /cd /var/log
> tail -f app.log
# Single-shot with piped input
echo "deploy to production" | gt run --once
Flags
- --raw – Use direct command REPL mode (no AI)
- --once – Force single-shot mode (backwards compatible)
Perfect for operational tasks, DevOps workflows, and command execution with history.
Machine Learning Commands
gt ml list
List available ML models.
gt ml list
Shows:
- Model name
- Description
- Location
- Status (trained/not trained)
gt ml train <model>
Train an ML model using Python.
gt ml train complexity
gt ml train intent
Available models:
- complexity – Task complexity classifier (simple/complex/multistep)
- intent – Intent classifier (query/editor/research/review)
Requirements:
- Python 3.8+
- Will create venv and install dependencies automatically
gt ml test <model> [query]
Test a trained model with a query.
gt ml test complexity "implement oauth"
gt ml test intent "explain this code"
Shows prediction and probabilities for all classes.
gt ml eval <model> [-f file]
Evaluate model performance on test dataset.
gt ml eval complexity
gt ml eval intent -f ml/intent/data/eval.csv
Shows:
- Accuracy
- Precision/Recall/F1 per class
- Confusion matrix
- Low-confidence predictions
gt ml predict [model] <text>
Make prediction using embedded Go model (no Python runtime).
gt ml predict "implement auth" # uses complexity (default)
gt ml predict complexity "fix typo" # explicit model
gt ml predict intent "explain this code" # intent classification
Fast path:
- 1ms inference (vs 500ms LLM)
- Zero API cost
- Pure Go, no Python runtime
ML Configuration
Complexity Threshold
Controls when Guided Mode is automatically activated.
# View current threshold (default: 0.55)
gt config get defaults.ml_complex_threshold
# Set threshold (0.0-1.0)
gt config set defaults.ml_complex_threshold 0.6
Higher threshold = less sensitive (fewer Guided Mode triggers)
Intent Threshold
Controls when ML router is used instead of LLM.
# View current threshold (default: 0.7)
gt config get defaults.ml_intent_threshold
# Set threshold (0.0-1.0)
gt config set defaults.ml_intent_threshold 0.8
Higher threshold = more LLM fallbacks (more accurate but slower/expensive)
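The two thresholds can be tuned together. As a sketch, a setup that favors accuracy over cost might raise both (the values here are illustrative starting points, not project recommendations):

```
# Trigger Guided Mode less often
gt config set defaults.ml_complex_threshold 0.7

# Fall back to the LLM router more often
gt config set defaults.ml_intent_threshold 0.85
```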
Dependency Graph Commands
gt graph build
Force rebuild dependency graph, ignoring cache.
gt graph build
Shows:
- Number of nodes (files)
- Number of edges (dependencies)
- Build time
When to use:
- After major refactoring
- After adding/removing many files
- If cache seems stale
gt graph query <terms>
Find relevant files for a query term using PageRank.
gt graph query "authentication"
gt graph query "database connection"
gt graph query "api routes"
Shows:
- Matching files ranked by importance
- PageRank scores
- Why each file was selected
How it works:
- Keyword matching in file paths
- Neighbor expansion (imports/imported-by)
- PageRank weighting
- Top N selection
Graph Configuration
Max Files
Control how many files are added to context in chat mode.
# View current setting (default: 5)
gt config get defaults.graph_max_files
# Set max files (1-20)
gt config set defaults.graph_max_files 8
Recommendations:
- Small projects (<50 files): 3-5
- Medium projects (50-500 files): 5-8
- Large projects (500+ files): 8-12
Debug Graph
export GPTCODE_DEBUG=1
gt chat "your query" # Shows graph stats
Shows:
- Nodes/edges count
- Selected files
- PageRank scores
- Build time (cache hit/miss)
Command Comparison
| Command | Purpose | When to Use |
|---|---|---|
| chat | Interactive conversation | Quick questions, exploratory work |
| review | Code review | Before commit, quality check, security audit |
| tdd | TDD workflow | New features requiring tests |
| research | Understand codebase | Architecture analysis, onboarding |
| plan | Create implementation plan | Large features, complex changes |
| implement | Execute plan | Structured feature implementation |
| feature | Quick feature generation | Small, focused features |
| run | Execute tasks | DevOps, HTTP requests, CLI commands |
Backend Management
gt backend
Show current backend.
gt backend
Shows:
- Backend name
- Type (openai/ollama)
- Base URL
- Default model
gt backend list
List all configured backends.
gt backend list
gt backend show [name]
Show backend configuration. Shows current if no name provided.
gt backend show groq
Shows:
- Type and URL
- Default model
- All configured models
gt backend use <name>
Switch to a backend.
gt backend use groq
gt backend use openrouter
gt backend use ollama
gt backend create
Create a new backend.
gt backend create mygroq openai https://api.groq.com/openai/v1
gptcode key mygroq # Set API key
gt config set backend.mygroq.default_model llama-3.3-70b-versatile
gt backend use mygroq
gt backend delete
Delete a backend.
gt backend delete mygroq
Profile Management
gt profile
Show current profile.
gt profile
Shows:
- Backend and profile name
- Agent models (router, query, editor, research)
gt profile list [backend]
List all profiles. Optionally filter by backend.
gt profile list # All profiles
gt profile list groq # Only groq profiles
Shows profiles in backend.profile format.
gt profile show [backend.profile]
Show profile configuration. Shows current if not specified.
gt profile show groq.speed
gt profile show # Current profile
gt profile use <backend>.<profile>
Switch to a backend and profile in one command.
gt profile use groq.speed
gt profile use openrouter.free
gt profile use ollama.local
Benefits:
- Faster than switching backend and profile separately
- Atomic operation (both or neither)
- Easier to remember
Advanced Profile Commands
For creating and configuring profiles, use gt profiles (plural):
# Create new profile
gt profiles create groq speed
# Configure agents
gt profiles set-agent groq speed router llama-3.1-8b-instant
gt profiles set-agent groq speed query llama-3.1-8b-instant
gt profiles set-agent groq speed editor llama-3.1-8b-instant
gt profiles set-agent groq speed research llama-3.1-8b-instant
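Once configured, verify the new profile and switch to it with the singular gt profile commands documented above:

```
# Inspect the agent models, then activate the profile
gt profile show groq.speed
gt profile use groq.speed
```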
Environment Variables
GPTCODE_DEBUG
Enable debug output to stderr.
GPTCODE_DEBUG=1 gt chat
Shows:
- Agent routing decisions
- Iteration counts
- Tool execution details
Configuration
All configuration lives in ~/.gptcode/:
~/.gptcode/
├── profile.yaml # Backend and model settings
├── system_prompt.md # Base system prompt
├── memories.jsonl # Example memory store
└── plans/ # Saved implementation plans
└── 2025-01-15-add-auth.md
Example profile.yaml
defaults:
  backend: groq
  model: fast

backends:
  groq:
    type: chat_completion
    base_url: https://api.groq.com/openai/v1
    default_model: llama-3.3-70b-versatile
    models:
      fast: llama-3.3-70b-versatile
      smart: llama-3.3-70b-specdec
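A local Ollama backend can sit alongside the hosted one. The fragment below is a sketch: the type value, base URL (Ollama's default local port is 11434), and model names are assumptions — check gt backend show against your own install:

```
backends:
  ollama:
    type: ollama                       # assumed type value
    base_url: http://localhost:11434   # Ollama's default local endpoint
    default_model: llama3              # placeholder model name
    models:
      local: llama3
```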
Advanced Configuration
Direct Config Manipulation
For advanced users who need direct access to configuration values:
# Get configuration value
gt config get defaults.backend
gt config get defaults.profile
gt config get backend.groq.default_model
# Set configuration value
gt config set defaults.backend groq
gt config set defaults.profile speed
gt config set backend.groq.default_model llama-3.3-70b-versatile
Note: For most use cases, prefer the user-friendly commands:
- gt backend use instead of gt config set defaults.backend
- gt profile use instead of gt config set defaults.profile
Next Steps
- See Research Mode for workflow details
- See Plan Mode for plan structure