AI-Powered Product Engineering Assistant
Beyond code generation — gt ships production-ready features with design systems, tracking pixels, health checks, and quality guardrails built-in. Bring Your Own Keys. Run with Groq, OpenAI, or locally with Ollama.
Input Analysis · Transformers
The transformer processes your command through self-attention to generate intent predictions.
Intelligence Routing
Based on the intent classification, the system routes to the appropriate execution mode: Speed or Logic.
Agent Execution · Orchestrated
Multiple agents collaborate sequentially to analyze, plan, edit, and validate changes.
Response Generation · Transformers
The final response is decoded from the execution context by the transformer.
Autonomous Execution
gt do "task" orchestrates 4 specialized agents: Analyzer → Planner → Editor → Validator. Auto-retry with model switching when validation fails.
Validation & Safety
File validation prevents creating unintended files. Success criteria auto-verified. No surprise scripts or configs. Supervised vs autonomous modes.
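The file check can be pictured as a simple allow-list: the plan names the files an edit may touch, and every edit path is validated against that list before it is applied. A minimal sketch in Go follows; the type and method names are hypothetical, not the actual gptcode code.

// Illustrative allow-list check: reject any edit whose target path is not
// in the plan's allowed file list. Names are hypothetical.
package plan

import "fmt"

type Plan struct {
    AllowedFiles []string // files the Planner permits the Editor to touch
}

func (p *Plan) ValidateEdit(path string) error {
    for _, allowed := range p.AllowedFiles {
        if allowed == path {
            return nil
        }
    }
    return fmt.Errorf("refusing to edit %s: not in the allowed file list", path)
}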
Intelligent Context
Dependency graph + PageRank identifies relevant files. 5x token reduction (100k → 20k). ML intent classification (1ms routing).
Radically Affordable
$0-5/month vs $20-30/month subscriptions. Use Groq for speed, Ollama for free. Auto-selects best models per agent.
Interactive Modes
gt chat for conversations. gt run for tasks with follow-up. Context-aware from CLI or Neovim plugin.
Manual Workflow
Break down complex tasks: gt research → gt plan → gptcode implement. Full control when you need it.
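A manual pass over one feature might look like this; the quoted prompts are only examples, and the commands are the ones shown in Quick Start.

# Research, then plan, then implement under your own review
gt research "rate limiting strategies for Go HTTP services"
gt plan "implement rate limiting on the public API"
gptcode implement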
🎯 15 Product Skills
Beyond code: Design Systems, Product Metrics, Production Guardrails, QA Automation. Language idioms + product best practices built into every prompt.
Agent Orchestration: Analyzer → Planner → Editor → Validator
Fast routing, focused context, safe edits, and verified results — end to end
Analyzer
Scans the codebase, builds a dependency graph, and selects only the relevant files
gt do "add authentication"
Planner
Creates a concrete plan with success criteria and an allowed file list
Editor
Applies changes incrementally with file validation and auto-recovery
Validator
Runs tests and checks success criteria before finishing
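Conceptually, the loop behind gt do looks something like the sketch below: run the four agents in order and, if a stage (typically the Validator) fails, retry the pipeline with the next model. This is an illustrative sketch only; the Agent interface, names, and retry policy are assumptions rather than the actual gptcode implementation.

// Illustrative sketch of the Analyzer → Planner → Editor → Validator loop
// with auto-retry and model switching. All types and names are hypothetical.
package orchestrate

import "fmt"

type Agent interface {
    Run(task, model string) error
}

// Orchestrate runs each agent in sequence; if a stage fails it retries the
// whole pipeline with the next model in the fallback list.
func Orchestrate(task string, agents []Agent, models []string) error {
    var lastErr error
    for _, model := range models {
        lastErr = nil
        for _, agent := range agents {
            if err := agent.Run(task, model); err != nil {
                lastErr = err
                break // stage failed: switch model and try again
            }
        }
        if lastErr == nil {
            return nil // every stage, including the Validator, succeeded
        }
    }
    return fmt.Errorf("all models exhausted: %w", lastErr)
}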
Quick Start
1. Install CLI
# One-line install (creates gptcode and gt commands)
curl -sSL https://gptcode.dev/install.sh | bash
# Or using go install
go install github.com/gptcode-cloud/cli/cmd/gptcode@latest
# Setup
gt setup
2. Add Neovim Plugin
-- lazy.nvim
{
  dir = "~/workspace/gptcode/neovim", -- local checkout of the plugin
  config = function()
    require("gptcode").setup()
  end,
  keys = {
    { "<C-d>", "<cmd>GPTCodeChat<cr>", desc = "Toggle Chat" },
    { "<C-m>", "<cmd>GPTCodeModels<cr>", desc = "Profiles" },
  },
}
3. Start Coding
# Use gptcode or the short alias gt
gt do "add user authentication with JWT"
gt chat
gt research "best practices for error handling"
gt plan "implement rate limiting"
Core Capabilities
Three Ways to Work
- Autonomous Copilot: gt do "task" handles everything: analysis, planning, execution, and validation
- Interactive Chat: gt chat for conversations with context awareness and follow-ups
- Structured Workflow: gt research → gt plan → gptcode implement for full control
Special Modes
- TDD Mode: gt tdd for a test-driven development workflow
- Code Review: gt review for automated bug detection and security analysis
- Task Execution: gt run for tasks with follow-up conversations
- Web Research: Built-in search and documentation lookup
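In practice that might look like the commands below; the task strings are examples, and whether gt tdd and gt run take a quoted task the way gt do does is an assumption.

# Special modes (task strings are examples; argument shapes assumed to mirror gt do)
gt tdd "add pagination to the orders endpoint"
gt review
gt run "profile the import pipeline and suggest fixes"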
Intelligence & Optimization
- Multi-Agent Architecture: Router, Query, Editor, Research agents working together
- ML-Powered: Intent classification (1ms) and complexity detection with zero API calls
- Dependency Graph: Smart context selection with 5x token reduction (Go, Python, JS/TS, Ruby, Rust)
- Cost Optimized: Mix cheap/free models per agent ($0-5/month vs $20-30/month)
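The dependency-graph selection can be pictured as PageRank over the project's import graph: files that many other files depend on score highest and are the ones kept in context. The Go sketch below is illustrative only; the file names and the simplified dangling-node handling are assumptions, not the gptcode algorithm.

// Illustrative sketch: ranking files in an import graph with a simplified
// PageRank so only the most central files are kept in the model's context.
package main

import "fmt"

// pageRank iterates over a file -> imported-files adjacency map.
// Rank flows from an importer to the files it imports, so heavily
// imported files end up with the highest scores.
func pageRank(graph map[string][]string, iters int, damping float64) map[string]float64 {
    n := float64(len(graph))
    rank := make(map[string]float64, len(graph))
    for file := range graph {
        rank[file] = 1 / n
    }
    for i := 0; i < iters; i++ {
        next := make(map[string]float64, len(graph))
        for file := range graph {
            next[file] = (1 - damping) / n
        }
        for file, imports := range graph {
            if len(imports) == 0 {
                continue // simplified: files with no imports just leak their mass
            }
            share := rank[file] / float64(len(imports))
            for _, imp := range imports {
                next[imp] += damping * share
            }
        }
        rank = next
    }
    return rank
}

func main() {
    // Hypothetical project layout.
    graph := map[string][]string{
        "cmd/main.go":      {"internal/auth.go", "internal/db.go"},
        "internal/auth.go": {"internal/db.go"},
        "internal/db.go":   {},
        "internal/util.go": {},
    }
    fmt.Println(pageRank(graph, 20, 0.85)) // internal/db.go scores highest
}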
Developer Experience
- Profile Management: Switch between cost/speed/quality configurations instantly
- Model Flexibility: Groq, Ollama, OpenRouter, OpenAI, Anthropic, DeepSeek
- Neovim Integration: Floating chat, model search (300+ models), LSP/Tree-sitter aware
- Validation & Safety: File validation, success criteria, supervised mode
Why GPTCode?
GPTCode isn't trying to beat Cursor or Copilot. It's trying to be different—and yours.
- Transparent: When it breaks, you can read and fix the code
- Hackable: Don't like something? Change it—it's just Go
- Model agnostic: Switch LLMs in 2 minutes (Groq, Ollama, OpenAI, etc.)
- Honest: E2E tests at 55%—no "95% accuracy" marketing
- Affordable: $2–5/month (Groq) or $0/month (Ollama)
- Read the full positioning →
- Original vision →
- Agent routing vs tool search →
- Intelligent model selection →
- Dependency graph →
- Chat REPL →
- 🆕 Universal Context Management →
Documentation
- Commands Reference – Complete CLI command guide
- Research Mode – Web search and documentation lookup
- Plan Mode – Planning and implementation workflow
- ML Features – Intent classification and complexity detection
- Compare Models – Interactive model comparison tool
- Blog – Configuration guides and best practices