# Getting Started: Where to Put What
Each configuration mechanism serves a different purpose. Placing information in the right mechanism controls context cost: it determines what every agent pays for on every invocation and what loads only on demand.
## Configuration Mechanisms
| Mechanism | Purpose | When loaded |
|---|---|---|
| Project context file | Project facts every agent always needs | Every session |
| Rules (system prompts) | Per-agent behavior constraints | Every agent invocation |
| Skills | Named session procedures - the specification | On explicit invocation |
| Commands | Named invocations - trigger a skill or a direct action | On user or agent call |
| Hooks | Automated, deterministic actions | On trigger event - no agent involved |
## Project Context File
The project context file is a markdown document that every agent reads at the start of every session. Put here anything that every agent always needs to know about the project. The filename differs by tool - Claude Code uses CLAUDE.md, Gemini CLI uses GEMINI.md, OpenAI Codex uses AGENTS.md, and GitHub Copilot uses .github/copilot-instructions.md - but the purpose does not.
Put in the project context file:
- Language, framework, and toolchain versions
- Repository structure - key directories and what lives where
- Architecture decisions that constrain all changes (example: “this service must not make synchronous external calls in the request path”)
- Non-obvious conventions that agents would otherwise violate (example: “all database access goes through the repository layer; never access the ORM directly from handlers”)
- Where tests live and naming conventions for test files
- Non-obvious business rules that govern all changes
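As a concrete illustration, a root context file for a hypothetical Python service might look like the fragment below - every version, path, and rule in it is invented for the example:

```markdown
<!-- CLAUDE.md (illustrative sketch - names and versions are hypothetical) -->
- Python 3.12, FastAPI, dependencies managed with uv
- src/ holds application code; tests/ mirrors the src/ layout
- All database access goes through the repository layer; never call the ORM from handlers
- This service must not make synchronous external calls in the request path
```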
Do not put in the project context file:
- Task instructions - those go in rules or skills
- File contents - load those dynamically per session
- Context specific to one agent - that goes in that agent’s rules
- Anything an agent only needs occasionally - load it when needed, not always
Because the project context file loads on every session, every line is a token cost on every invocation. Keep it to stable facts, not procedures. A bloated project context file is an invisible per-session tax.
## Rules (System Prompts)
Rules define how a specific agent behaves. Each agent has its own rules document, injected at the top of that agent’s context on every invocation. Rules are stable across sessions - they define the agent’s operating constraints, not what it is doing right now.
Put in rules:
- Agent scope: what the agent is responsible for, and explicitly what it is not
- Output format requirements - especially for agents whose output feeds another agent (use structured JSON at these boundaries)
- Explicit prohibitions (“do not modify files not in your context”)
- Early-exit conditions to minimize cost (“if the diff contains no logic changes, return {"decision": "pass"} immediately without analysis”)
- Verbosity constraints (“return code only; no explanation unless explicitly requested”)
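A minimal rules document for a hypothetical review agent might look like this - the agent name and the JSON shape are illustrative, not a required format:

```markdown
<!-- rules for a hypothetical "reviewer" agent -->
You review staged diffs. You do not modify files, and you do not review files outside your context.
Return JSON only: {"decision": "pass" | "block", "reasons": ["..."]}
If the diff contains no logic changes, return {"decision": "pass"} immediately without analysis.
Return no prose unless explicitly requested.
```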
Do not put in rules:
- Project facts - those go in the project context file
- Session-specific information - that is loaded dynamically by the orchestrator
- Multi-step procedures - those go in skills
Rules are placed first in every agent’s context. This placement is a caching decision, not just convention. Stable content at the top of context allows the model’s server to cache the rules prefix and reuse it across calls, which reduces the effective input cost of every invocation. See Tokenomics for how caching interacts with context order.
Rules are plain markdown, injected at session start. The content is the same regardless of tool; where it lives differs.
## Skills
A skill is a named session procedure - a markdown document describing a multi-step workflow that an agent invokes by name. The agent reads the skill document, follows its instructions, and returns a result. A skill has no runtime; it is pure specification in text. Claude Code calls these commands and stores them in .claude/commands/; Gemini CLI uses .gemini/skills/; OpenAI Codex supports procedure definitions in AGENTS.md; GitHub Copilot reads procedure markdown from .github/.
Put in skills:
- Session lifecycle procedures: how to start a session, how to run the pre-commit review gate, how to close a session and write the summary
- Pipeline-restore procedures for when the pipeline fails mid-session
- Any multi-step workflow the agent should execute consistently and reproducibly
Do not put in skills:
- One-time instructions - write those inline
- Anything that should run automatically without agent involvement - that belongs in a hook
- Project facts - those go in the project context file
- Per-agent behavior constraints - those go in rules
Each skill should do one thing. A skill named review-and-commit is doing two things. Split it. When a procedure fails mid-execution, a single-responsibility skill makes it obvious which step failed and where to look.
A normal session runs three skills in sequence: /start-session (assembles context and prepares the implementation agent), /review (invokes the pre-commit review gate), and /end-session (validates all gates, writes the session summary, and commits). Add /fix for pipeline-restore mode. See Coding & Review Setup for the complete definition of each skill.
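A skill document is nothing more than the procedure written down. A sketch of a start-session skill, with hypothetical paths:

```markdown
<!-- .claude/commands/start-session.md (sketch - paths are hypothetical) -->
# start-session

1. Read the most recent summary in docs/sessions/, if one exists.
2. Load the BDD scenario assigned to this session.
3. Read only the files that scenario touches.
4. Confirm the pipeline is green; if not, switch to pipeline-restore mode.
```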
The skill text is identical across tools. Where the file lives differs:
| Tool | Skill location |
|---|---|
| Claude Code | .claude/commands/start-session.md |
| Gemini CLI | .gemini/skills/start-session.md |
| OpenAI Codex | Named ## Task: section in AGENTS.md |
| GitHub Copilot | .github/start-session.md |
## Commands
A command is a named invocation - it is how you or the agent triggers a skill. Skills define what to do; commands are how you call them. In Claude Code, a file named start-session.md in .claude/commands/ creates the /start-session command automatically. In Gemini CLI, skills in .gemini/skills/ are invoked by name in the same way. The command name and the skill document are one-to-one: one file, one command.
Put in commands:
- Short-form aliases for frequently used skills (example: /review instead of “run the pre-commit review gate”)
- Direct one-line instructions that do not need a full skill document (“summarize the session”, “list open scenarios”)
- Agent actions you want to invoke consistently by name without retyping the instruction
Do not put in commands:
- Multi-step procedures - those belong in a skill document that the command references
- Anything that should run without being called - that belongs in a hook
- Project facts or behavior constraints - those go in the project context file or rules
A command that runs a multi-step procedure should invoke the skill document by name, not inline the steps. This keeps the command short and the procedure in one place.
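In tools that separate the two, the command body can be a single delegating line - the names below are illustrative:

```markdown
<!-- a command that delegates to a skill rather than inlining its steps -->
Execute the procedure defined in the review skill.
Do not improvise or skip steps; report the gate's decision verbatim.
```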
## Hooks
Hooks are automated actions triggered by events - pre-commit, file-save, post-test. Hooks run deterministic tooling: linters, type checkers, secret scanners, static analysis. No agent decision is involved; the tool either passes or blocks.
Put in hooks:
- Linting and formatting checks
- Type checking
- Secret scanning
- Static analysis (SAST)
- Any check that is fast, deterministic, and should block on failure without requiring judgment
Do not put in hooks:
- Semantic review - that requires an agent; invoke the review orchestrator via a skill
- Checks that require judgment - agents decide, hooks enforce
- Steps that depend on session context - hooks operate without session awareness
Hooks run before the review agent. If the linter fails, there is no reason to invoke the review orchestrator. Deterministic checks fail fast; the AI review gate runs only on changes that pass the baseline mechanical checks.
Git pre-commit hooks are independent of the AI tool - they run via git regardless of which model you use. Claude Code and Gemini CLI additionally support tool-use hooks in their settings.json, which trigger shell commands in response to agent events (for example, running linters automatically when the agent stops). OpenAI Codex and GitHub Copilot do not have an equivalent built-in hook system; use git hooks directly with those tools.
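As an illustration, a tool-use hook in a settings.json might look roughly like the fragment below - treat the event and field names as a sketch, and check your tool's documentation for the current schema:

```json
{
  "hooks": {
    "Stop": [
      { "hooks": [{ "type": "command", "command": "ruff check . && mypy src/" }] }
    ]
  }
}
```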
The AI review step (/review) runs after these pass. It is invoked by the agent as part of the session workflow, not by the hook sequence directly.
## Decision Framework
For any piece of information or procedure, apply this sequence:
1. Does every agent always need this? → Project context file
2. Does this constrain how one specific agent behaves? → That agent’s rules
3. Is this a multi-step procedure invoked by name? → A skill
4. Is this a short invocation that triggers a skill or a direct action? → A command
5. Should this run automatically without any agent decision? → A hook
## Context Loading Order
Within each agent invocation, load context in this order:
1. Agent rules (stable - cached across every invocation)
2. Project context file (stable - cached across every invocation)
3. Feature description (stable within a feature - often cached)
4. BDD scenario for this session (changes per session)
5. Relevant existing files (changes per session)
6. Prior session summary (changes per session)
7. Staged diff or current task context (changes per invocation)
Stable content at the top. Volatile content at the bottom. Rules and the project context file belong at the top because they are constant across invocations and benefit from server-side caching. Staged diffs and current files change on every call and provide no caching benefit regardless of where they appear.
## File Layout
The examples below show how the configuration mechanisms map to Claude Code, Gemini CLI, OpenAI Codex CLI, and GitHub Copilot. The file names and locations differ; the purpose of each mechanism does not.
The skill and command documents are plain markdown in all cases - the same procedure text works across tools because skills are specifications, not code. In Claude Code, the commands directory unifies both: each file in .claude/commands/ is a skill document and creates a slash command of the same name.

The .claude/agents/ directory is specific to Claude Code - it defines named sub-agents with their own system prompt and model tier, invocable by the orchestrator. Other tools handle agent configuration programmatically rather than via files. For multi-agent architectures and advanced agent composition, see Agentic Architecture Patterns.
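For reference, a Claude Code sub-agent definition is a markdown file with frontmatter. The fields below are a sketch; verify them against the tool's documentation:

```markdown
<!-- .claude/agents/reviewer.md (illustrative) -->
---
name: reviewer
description: Pre-commit review gate. Reads staged diffs and returns pass/block.
model: haiku
---
You review staged diffs against the project's conventions...
```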
## Decomposed Context by Code Area
A single project context file at the repo root works for small codebases. For larger ones with distinct bounded contexts, split the project context file by code area. Claude Code, Gemini CLI, and OpenAI Codex load context files hierarchically: when an agent works in a subdirectory, it reads the context file there in addition to the root-level file. Area-specific facts stay out of the root file and load only when relevant, which reduces per-session token cost for agents working in unrelated areas.
What goes in area-specific files: Facts that apply only to that area - domain rules, local naming conventions, area-specific architecture constraints, and non-obvious business rules that govern changes in that part of the codebase. Do not repeat content already in the root file.
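A decomposed layout might look like this - the area names are illustrative:

```
repo/
├── CLAUDE.md           # root: facts every agent needs, loaded in every session
├── billing/
│   └── CLAUDE.md       # billing domain rules, loaded only when working here
└── search/
    └── CLAUDE.md       # search-area conventions, loaded only when working here
```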
## Related Content
- Agentic Architecture Patterns - the design principles behind skills, agents, hooks, and multi-agent composition
- Coding & Review Setup - the complete rules, skills, and hooks for a coding and pre-commit review configuration
- Small-Batch Sessions - how session discipline and context hygiene work together
- Tokenomics - the full optimization framework including prompt caching strategy and context order