Files
apes/.claude/skills/codex-review/SKILL.md
limiteinductive e940afde52 S1: Colony backend skeleton — Axum + SQLite, channels + messages CRUD
Monorepo structure:
- crates/colony-types: API types (serde + ts-rs), separate from DB models
- crates/colony: Axum server, SQLite via sqlx, migrations

Working endpoints:
- GET /api/health
- GET/POST /api/channels
- GET /api/channels/{id}
- GET /api/channels/{id}/messages (?since=, ?type=, ?user_id=)
- POST /api/channels/{id}/messages (with type + metadata)
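
As a sketch of how the message endpoints might be exercised — the base URL, channel id, and JSON field names are assumptions; only the paths and query parameters come from this commit:

```shell
# Assumed base URL; the actual bind address depends on the server config.
BASE="http://localhost:3000"

# List messages in channel 1 using the documented query params.
LIST_URL="$BASE/api/channels/1/messages?since=42&type=text&user_id=benji"
echo "$LIST_URL"

# Post a message with a type and metadata (these field names are hypothetical).
POST_BODY='{"type":"text","content":"hello #general","metadata":{}}'
echo "$POST_BODY"

# With the server running, these would become real requests:
#   curl -s "$LIST_URL"
#   curl -s -X POST "$BASE/api/channels/1/messages" \
#        -H 'Content-Type: application/json' -d "$POST_BODY"
```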

Data model includes:
- seq monotonic ordering, soft delete, same-channel reply constraint
- Seeded users (benji, neeraj) and #general channel
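
A minimal SQLite sketch of the seq and soft-delete behavior — the table and column names are assumptions, not the actual migrations in crates/colony:

```shell
# Requires the sqlite3 CLI. Schema is illustrative only.
DB="$(mktemp)"
sqlite3 "$DB" <<'SQL'
CREATE TABLE messages (
  id         INTEGER PRIMARY KEY,
  channel_id INTEGER NOT NULL,
  seq        INTEGER NOT NULL,              -- uniqueness enforced here;
                                            -- monotonic assignment is the app's job
  reply_to   INTEGER REFERENCES messages(id),
  deleted_at TEXT,                          -- soft delete: NULL = live
  UNIQUE (channel_id, seq)
);
INSERT INTO messages (channel_id, seq) VALUES (1, 1), (1, 2);
-- Soft delete keeps the row and only stamps deleted_at.
UPDATE messages SET deleted_at = datetime('now')
WHERE channel_id = 1 AND seq = 1;
SQL
LIVE=$(sqlite3 "$DB" 'SELECT count(*) FROM messages WHERE deleted_at IS NULL;')
echo "live messages: $LIVE"
rm -f "$DB"
```

Note that the same-channel reply constraint (a reply must live in its parent's channel) is not expressible as a plain foreign key; presumably the real implementation enforces it in the handler or with a trigger.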

Also: codex-review skill, .gitignore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 18:54:43 +02:00


| name | description | argument-hint |
| --- | --- | --- |
| codex-review | Get a parallel GPT-5.4 review of code, plans, or decisions. Use when you want a second opinion, have finished a non-trivial implementation, or want to validate an approach before presenting to the ape. | [what to review — file paths, plan description, or question] |

Codex Review

Get an independent GPT-5.4 review via the codex CLI. Use this proactively after finishing non-trivial work, or when you want a second opinion before proceeding.

When to use

  • After writing a significant chunk of code
  • Before presenting an architecture decision to the ape
  • When you're unsure about an approach
  • After a refactor to catch regressions
  • When the critic skill feels too heavy (critic is for decisions, this is for code review)

Execution

Run this command via Bash, substituting the review request for `$ARGUMENTS`:

```shell
codex exec -c 'reasoning_effort="high"' "Review: $ARGUMENTS. Read all relevant files. Be specific — file paths, line numbers, exact issues. What bugs, edge cases, or design problems exist? Do NOT spawn sub-agents. Answer directly in bullet points." 2>&1
```
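
For instance, a filled-in invocation for this commit might assemble the prompt like this — the review target is illustrative; only the prompt template comes from this skill:

```shell
# Hypothetical review target; substitute whatever $ARGUMENTS should cover.
TARGET="crates/colony channels + messages CRUD"
PROMPT="Review: $TARGET. Read all relevant files. Be specific — file paths, line numbers, exact issues. What bugs, edge cases, or design problems exist? Do NOT spawn sub-agents. Answer directly in bullet points."
echo "$PROMPT"
# With the codex CLI installed:
#   codex exec -c 'reasoning_effort="high"' "$PROMPT" 2>&1
```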

It can run in the background (`run_in_background: true`) if you have other work to do in parallel.

Output

Present the codex findings to the ape as:

  • Issues found (if any)
  • Agreements with your approach (if reviewing your own work)
  • Disagreements to address

If codex found real issues in your code, fix them before presenting to the ape. Apes don't do tasks.