A Bold Claim
I might have built the fastest ticketing system on Earth. This isn’t marketing hyperbole. It’s physics.

The Speed of Light Problem
JIRA, Linear, Trello, Asana, and every other cloud-based ticketing system share a constraint: the speed of light. When you click “create issue” in JIRA, here’s what happens:

- Your browser sends a request to Atlassian’s servers
- The request travels through multiple network hops
- A load balancer routes it to an application server
- The app server queries a database
- The database returns results
- The app server processes the response
- The response travels back through the internet
- Your browser renders the result
The speed of light in fiber optic cable is about 200,000 km/s. A round trip
from San Francisco to an AWS data center in Virginia (~4,000 km) takes a
minimum of 40ms just for the physics. Add TCP handshakes, TLS negotiation, and
server processing… you get the idea.
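The arithmetic above is easy to verify. A minimal sketch, using the rough figures quoted in the text (200,000 km/s in fiber, ~4,000 km one way):

```rust
// Back-of-the-envelope check of the minimum round-trip time quoted above.
fn main() {
    let fiber_km_per_s = 200_000.0; // speed of light in fiber, approximate
    let one_way_km = 4_000.0;       // SF to a Virginia data center, approximate
    let round_trip_ms = 2.0 * one_way_km / fiber_km_per_s * 1_000.0;
    println!("minimum round trip: {round_trip_ms} ms"); // 40 ms
    // TCP and TLS handshakes each add at least one more round trip
    // before the first byte of application data even moves.
}
```

That 40ms is a hard floor no amount of server optimization can remove.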
The Local-First Advantage
ticket-rs doesn’t have this problem. When you run `tk list`:
- Read files from disk (~0.1ms)
- Parse YAML frontmatter (~2ms)
- Format output (~0.5ms)
- Done
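The read path above is just string processing, which is why it fits in single-digit milliseconds. A minimal sketch of the frontmatter step, assuming `---`-delimited YAML with a `title:` key (an illustrative layout, not ticket-rs’s actual on-disk format):

```rust
// Sketch: pull the title out of a ticket file's YAML frontmatter.
// The `---` delimiters and `title:` key are assumptions for illustration.
fn parse_title(file: &str) -> Option<&str> {
    // Frontmatter sits between the first two `---` markers.
    let front = file.split("---").nth(1)?;
    front
        .lines()
        .find_map(|line| line.strip_prefix("title:"))
        .map(str::trim)
}

fn main() {
    let ticket = "---\ntitle: Fix login bug\nstatus: open\n---\nSteps to reproduce…";
    assert_eq!(parse_title(ticket), Some("Fix login bug"));
}
```

No sockets, no handshakes, no serialization round trips: the whole operation stays in process.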
Cloud Ticketing
~500ms–2s per operation, limited by network latency, server load, and database queries
ticket-rs
~7–21ms per operation, limited only by disk I/O and CPU speed
But Wait, I Have Benchmarks
Now, I can’t easily benchmark JIRA (their servers, their rules). But I can benchmark against the local tools that inspired ticket-rs:

- ticket — A bash implementation
- beads — Steve Yegge’s Go implementation with SQLite daemon
- kardianos-ticket — A Go implementation with trie-based YAML parsing
- vibe-ticket — A Rust implementation
The Results
| Implementation | Median Time | Speedup |
|---|---|---|
| ticket-rs (Rust CLI, tk) | 9.0ms | 6.2x faster |
| kardianos/ticket (Go, trie YAML) | 9.7ms | 5.8x faster |
| nwiizo/vibe-ticket (Rust, archived) | 16.2ms | 3.5x faster |
| ticket-py (Python bindings via PyO3) | 24.3ms | 2.3x faster |
| wedow/ticket (Bash) | 35.1ms | 1.6x faster |
| steveyegge/beads (Go daemon) | 55.9ms | 1.0x (baseline) |
| steveyegge/beads (Go direct) | 58.8ms | 1.1x slower |
Benchmarks run with 500 tickets, 30 iterations.
Data source: benchmark-data.json
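The methodology is plain wall-clock timing of CLI invocations, taking the median to discard outliers. A minimal sketch of that harness (the `tk list` target and the constants are examples matching the setup above, not the suite’s actual code):

```rust
use std::process::Command;
use std::time::Instant;

// Sketch: median-of-N wall-clock timing for a CLI invocation.
fn median_ms(cmd: &str, args: &[&str], iters: usize) -> f64 {
    let mut samples: Vec<f64> = (0..iters)
        .map(|_| {
            let start = Instant::now();
            // Ignore the command's output; we only care about elapsed time.
            let _ = Command::new(cmd).args(args).output();
            start.elapsed().as_secs_f64() * 1_000.0
        })
        .collect();
    samples.sort_by(|a, b| a.total_cmp(b));
    samples[samples.len() / 2] // middle sample (upper median for even N)
}

fn main() {
    // Example target: time `tk list` over 30 runs, as in the table above.
    println!("median: {:.1} ms", median_ms("tk", &["list"], 30));
}
```

Medians are preferred over means here because a single cold-cache or scheduler-noise outlier would otherwise skew the result.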
Scaling Analysis
We tested with datasets from 10 to 1,000 tickets:

- ticket-rs: O(n) scaling with excellent constants
- bash: O(n²) for operations requiring full repository scans
- beads daemon: constant daemon overhead + O(n) parsing

At 1,000 tickets, the performance gap widens further. At 10,000 tickets, bash becomes unusable while ticket-rs stays snappy.

The “Fastest on Earth” Claim
Okay, let’s be precise about what I’m claiming. For the specific use case of:

- Local-first issue tracking
- CLI-based operations
- AI agent workflows
- Dependency-aware prioritization
- Dependency graphs
- PageRank-based prioritization
- BM25 search
- Bidirectional sync with Linear/GitHub
- AI-native command output
ticket-rs is, as far as I can measure, the fastest option available. If you know of a faster ticketing system, please open an issue. I genuinely want to know, and I’ll update the benchmarks.
Why This Matters
Speed matters for developer experience, but it matters even more for coding agents. When Claude Code or Cursor runs `tk triage` in an agentic loop, every millisecond of latency slows the entire cycle. A 500ms API call means the agent sits idle. Across dozens of tool calls per session, that latency compounds.
At 14-21ms per operation, ticket-rs lets coding agents:
- Query project state on every iteration without bottlenecking
- Run `tk prime` at session start for instant context engineering
- Iterate on dependency graphs in real time
- Stay responsive at 1,000+ tickets
Try It Yourself
Run `tk triage` and watch it complete before you can blink.
The Benchmark Suite
Want to reproduce these results? We maintain a benchmark suite:

- Repeated benchmarks (30 iterations, 500 tickets)
- Scaling analysis (10, 50, 100, 500, 1,000 tickets)
- Statistical analysis with confidence intervals
Benchmark report: pypi/benchmarks/BENCHMARK_REPORT.md
Benchmark data (JSON): web/src/data/benchmark-data.json