Ad Hoc Queries#

Practical patterns for common gh-velocity tasks.

Compare two releases#

gh velocity quality release v2.0.0 --results json > v2.json
gh velocity quality release v1.9.0 --results json > v1.json

echo "v1.9.0 median lead time: $(jq -r '.aggregates.lead_time.median_seconds / 86400 | round | "\(.)d"' v1.json)"
echo "v2.0.0 median lead time: $(jq -r '.aggregates.lead_time.median_seconds / 86400 | round | "\(.)d"' v2.json)"

Compare bug ratios:

echo "v1.9.0 bug ratio: $(jq -r '.composition.bug_ratio * 100 | round | "\(.)%"' v1.json)"
echo "v2.0.0 bug ratio: $(jq -r '.composition.bug_ratio * 100 | round | "\(.)%"' v2.json)"
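The comparison can also be done programmatically. A minimal Python sketch, assuming the JSON shape used above; the sample values are invented, and a real run would json.load the saved v1.json/v2.json instead:

```python
def release_delta(old, new):
    """Median lead-time delta in days, bug-ratio delta in percentage points."""
    days = lambda r: r["aggregates"]["lead_time"]["median_seconds"] / 86400
    pct = lambda r: r["composition"]["bug_ratio"] * 100
    return round(days(new) - days(old), 1), round(pct(new) - pct(old), 1)

# Invented sample data in the shape of the saved v1.json / v2.json files;
# a real run would use json.load(open("v1.json")) and so on.
v1 = {"aggregates": {"lead_time": {"median_seconds": 6 * 86400}},
      "composition": {"bug_ratio": 0.25}}
v2 = {"aggregates": {"lead_time": {"median_seconds": 4 * 86400}},
      "composition": {"bug_ratio": 0.40}}

print(release_delta(v1, v2))  # (-2.0, 15.0)
```

A negative first number means lead time improved; a positive second number means the bug share grew.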

Find your slowest issues#

gh velocity quality release v1.2.0 --results json | \
  jq -r '.issues | sort_by(-.lead_time_seconds) | .[0:5] | .[] |
    "#\(.number) \(.title[0:40]) -- \(.lead_time_seconds / 86400 | round)d"'

Check label coverage before a release#

gh velocity quality release v1.2.0 --results json | \
  jq -r '"Bug: \(.composition.bug_count), Feature: \(.composition.feature_count), Unlabeled: \(.composition.other_count)"'

If other_count is high, label your issues before publishing the release. Run gh velocity config preflight to discover the repo's available labels and generate category matchers that fit them.
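The check is easy to automate as a pre-release gate. A hypothetical sketch using the composition field names above; the 20% cutoff and the sample counts are invented:

```python
import sys

def label_gate(composition, max_other_ratio=0.2):
    """True when the unlabeled share of issues stays at or under the cutoff."""
    total = (composition["bug_count"] + composition["feature_count"]
             + composition["other_count"])
    return total == 0 or composition["other_count"] / total <= max_other_ratio

# Invented sample: 2 unlabeled out of 10 passes a 20% gate.
sample = {"bug_count": 3, "feature_count": 5, "other_count": 2}
if not label_gate(sample):
    sys.exit("too many unlabeled issues; label them before releasing")
print("label coverage OK")
```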

Use --since to override the previous tag#

When the auto-detected previous tag is wrong (non-linear tag history, pre-releases mixed with stable), override explicitly:

gh velocity quality release v2.0.0 --since v1.9.0
gh velocity quality release v2.0.0 --since v1.9.0 --discover

The --discover flag shows which issues and PRs each linking strategy found, which helps debug unexpected results.

Analyze a repo you don't have locally#

Every command works with -R (or --repo):

gh velocity quality release v0.28.0 -R charmbracelet/bubbletea
gh velocity flow lead-time 500 -R charmbracelet/bubbletea
gh velocity quality release v5.2.1 -R go-chi/chi --discover
gh velocity flow throughput --since 30d -R cli/cli

Cycle time works remotely because it relies on API-based signals (PR creation date, label events, project status). Running from a local checkout adds commit counts and a fallback signal from commit history.

Generate a report for every release#

# five most recent tags; adjust head -5 to cover more
for tag in $(gh api repos/owner/repo/tags --jq '.[].name' | head -5); do
  echo "=== $tag ==="
  gh velocity quality release "$tag" -R owner/repo 2>/dev/null
  echo
done

To save each report as JSON for later analysis:

mkdir -p reports
for tag in $(gh api repos/owner/repo/tags --jq '.[].name' | head -5); do
  gh velocity quality release "$tag" -R owner/repo --results json > "reports/${tag}.json" 2>/dev/null
done
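With the per-tag files saved, the trend across releases can be summarized in one table. This sketch takes (tag, summary) pairs directly with invented values; a real run would json.load each reports/*.json file:

```python
def trend(rows):
    """One line per release: tag, median lead time in days, bug ratio."""
    return "\n".join(
        f"{tag:10} "
        f"{summary['aggregates']['lead_time']['median_seconds'] / 86400:5.1f}d  "
        f"{summary['composition']['bug_ratio'] * 100:3.0f}% bugs"
        for tag, summary in rows)

# Invented summaries in the shape of the saved report JSON.
rows = [
    ("v1.0.0", {"aggregates": {"lead_time": {"median_seconds": 8 * 86400}},
                "composition": {"bug_ratio": 0.10}}),
    ("v1.1.0", {"aggregates": {"lead_time": {"median_seconds": 5 * 86400}},
                "composition": {"bug_ratio": 0.30}}),
]
print(trend(rows))
```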

Export to CSV for spreadsheet analysis#

gh velocity quality release v1.2.0 --results json | \
  jq -r '["number","title","lead_time_days","cycle_time_days","outlier"],
    (.issues[] | [
      .number,
      .title,
      ((.lead_time_seconds // 0) / 86400 | round),
      ((.cycle_time_seconds // 0) / 86400 | round),
      .lead_time_outlier
    ]) | @csv' > release-metrics.csv
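The same export can be written with Python's stdlib csv module, which is easier to extend with computed columns. The field names follow the jq filter above; the sample row is invented:

```python
import csv
import io

def issues_to_csv(issues):
    """Write the same columns as the jq export above, with proper quoting."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["number", "title", "lead_time_days", "cycle_time_days", "outlier"])
    for issue in issues:
        writer.writerow([
            issue["number"],
            issue["title"],
            round((issue.get("lead_time_seconds") or 0) / 86400),
            round((issue.get("cycle_time_seconds") or 0) / 86400),
            issue["lead_time_outlier"],
        ])
    return buf.getvalue()

# Invented sample row; a real run would use json.load(...)["issues"].
sample = [{"number": 7, "title": 'Fix "flaky" test, again',
           "lead_time_seconds": 3 * 86400, "cycle_time_seconds": None,
           "lead_time_outlier": False}]
print(issues_to_csv(sample), end="")
```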

Use --scope for ad-hoc filtering#

The --scope flag adds GitHub search qualifiers that are AND'd with any scope.query in your config:

# Only issues assigned to a specific person
gh velocity flow lead-time --since 30d --scope "assignee:octocat"

# Only issues with a specific label
gh velocity flow throughput --since 30d --scope "label:team-backend"

# Combine multiple qualifiers
gh velocity report --since 30d --scope "label:team-frontend assignee:alice"

Check what each linking strategy found#

The --discover flag on quality release shows what each linking strategy (pr-link, commit-ref, changelog) discovered:

gh velocity quality release v1.2.0 --discover

The output lists issues found by each strategy and marks items that appear in multiple strategies. Use this to understand how well the strategies cover your workflow and whether you need to adjust commit_ref.patterns in your config.
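The overlap bookkeeping itself is plain set arithmetic. A sketch with invented issue numbers, where found maps each strategy to the issues it discovered:

```python
def strategy_overlap(found):
    """Split discovered issues into those seen by multiple strategies vs. one."""
    all_issues = set().union(*found.values())
    multi = {n for n in all_issues if sum(n in s for s in found.values()) > 1}
    return all_issues, multi, all_issues - multi

# Invented numbers for illustration.
found = {"pr-link": {101, 102}, "commit-ref": {102, 103}, "changelog": {103}}
all_issues, multi, unique = strategy_overlap(found)
print(sorted(multi), sorted(unique))  # [102, 103] [101]
```

Issues unique to a single strategy are the ones most at risk of being missed if that strategy stops matching your workflow.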

Bulk lead-time analysis#

Get per-issue lead times for all issues closed in a window:

gh velocity flow lead-time --since 30d --results json | \
  jq -r '.issues[] | "#\(.number) \(.title[0:40]) -- \(.lead_time_seconds / 86400 | round)d"'
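From that per-issue list you can compute your own summary statistics. A stdlib sketch with invented lead times:

```python
from statistics import median, quantiles

def lead_time_stats(seconds):
    """Median and p90 lead time in days from per-issue lead_time_seconds."""
    deciles = quantiles(seconds, n=10, method="inclusive")
    return round(median(seconds) / 86400, 1), round(deciles[8] / 86400, 1)

# Invented lead times of 1..9 days.
sample = [d * 86400 for d in range(1, 10)]
print(lead_time_stats(sample))  # (5.0, 8.2)
```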

Cycle time for a specific PR#

Measure a single PR directly:

gh velocity flow cycle-time --pr 99
gh velocity flow cycle-time --pr 99 --results json

Always uses PR created-to-merged timing, regardless of cycle_time.strategy in config.
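That timing is straightforward to reproduce from the GitHub API's created_at/merged_at fields. A sketch with invented timestamps in the API's ISO-8601 format:

```python
from datetime import datetime

def pr_cycle_time_days(created_at, merged_at):
    """Created-to-merged wall time in days from ISO-8601 timestamps."""
    parse = lambda ts: datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return round((parse(merged_at) - parse(created_at)).total_seconds() / 86400, 1)

print(pr_cycle_time_days("2024-05-01T09:00:00Z", "2024-05-03T21:00:00Z"))  # 2.5
```

The Z-to-+00:00 replacement keeps the parse working on Python versions before 3.11, which don't accept a trailing Z.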

Weekly velocity in JSON for dashboards#

gh velocity report --since 7d --results json > weekly.json

Extract key numbers:

jq '{
  issues_closed: .throughput.issues_closed,
  prs_merged: .throughput.prs_merged,
  median_lead_time_days: (.lead_time.median_seconds / 86400 | round),
  median_cycle_time_days: (.cycle_time.median_seconds / 86400 | round)
}' weekly.json

Post a report and save it locally#

# Save and post in one go
gh velocity report --since 30d --results markdown | tee report.md | \
  gh issue create --title "Weekly metrics" --body-file -

Prep for a 1:1 with my-week#

Get a personal summary of your recent activity — issues closed, PRs merged, reviews done — plus what's blocked and what's ahead. Works from anywhere, no repo context needed:

gh velocity status my-week

By default this shows all your activity across every repository. To limit to a single repo (which also shows releases):

gh velocity status my-week -R owner/repo

Customize the lookback period:

gh velocity status my-week --since 14d

The output includes:

  • Insights — shipping velocity, AI-assisted PR percentage, median and p90 lead time
  • Waiting on — PRs waiting for first review and stale issues, surfaced early for action
  • What I shipped — issues closed, PRs merged, PRs reviewed, releases (single-repo only)
  • What's ahead — open issues and PRs with status annotations (new, stale, needs review)
  • Review queue — PRs from others waiting on your review

PRs authored with AI tools (Claude Code, Copilot, etc.) are tagged [ai] based on Co-Authored-By trailers and PR body badges. The insights section shows the overall AI-assisted percentage.

If a .gh-velocity.yml config is present, its scope.query and exclude_users settings apply automatically. Without a config file, the command still works — you just won't get cycle time metrics or user exclusions.
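A minimal config sketch showing just those two settings; the key names follow the ones mentioned on this page, and the values are examples, not defaults:

```yaml
# .gh-velocity.yml -- example values, not defaults
scope:
  query: "label:team-backend"   # AND'd with any --scope qualifiers
exclude_users:
  - dependabot[bot]
  - renovate[bot]
```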

Next steps#