Stop Writing Goals Like “Build Me an App”
The problem is not that Codex or Claude Code cannot work fast. The problem is that vague goals quietly return the work to the human.
When I use Codex or Claude Code for a while, I often notice the same pattern.
“Keep going.” “Fix this too.” “Run the tests again.” “Update the docs as well.”
AI is fast. That part is true.
But if a person has to keep giving the next instruction, the bottleneck has simply moved back to the person.
/goal is useful because it reduces that repetition.
It is not just another prompt. It is closer to telling the AI what “done” means, then letting it check, fix, and continue until it reaches that state.
The point of /goal is not to make AI do more work. It is to give AI the standard for stopping.
A Good Goal Describes The Finished State
People often write something like this:
/goal Build me an app.
This is a weak goal.
Not because it is too short, but because it does not say what “finished” means.
What is an app? Is it done when the screen loads? Does it need login? Should data be saved? Should there be no errors? Should tests pass?
When the goal is unclear, AI usually does one of two things.
It stops too early, or it keeps circling around the work.
A better /goal looks like this:
/goal Fix the login flow until all login-related tests pass, lint succeeds, and the build completes successfully.
This is much stronger because the finished state is visible.
The AI can check:
- Did the login tests pass?
- Did lint pass?
- Did the build succeed?
When the standard can be checked, AI can judge completion more reliably.
If I want AI to work for longer without constant supervision, I do not only tell it what to do. I tell it when to stop.
What A Good /goal Usually Needs
A useful /goal usually includes three things:
- What the finished state looks like
- What must not be changed
- Which commands prove the work is done
For example:
/goal Migrate the old writing flow to the new flow while preserving existing behavior. Do not delete or skip tests. Continue until pnpm test, pnpm lint, and pnpm build all pass.
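The “commands prove the work is done” part boils down to exit codes: a goal is checkable when each named command can be run and either succeeds or fails. A minimal sketch of that idea, using placeholder commands (the real goal above would run pnpm test, pnpm lint, and pnpm build instead):

```python
import subprocess
import sys

# Placeholder commands stand in for the real checks (pnpm test, pnpm lint,
# pnpm build) so this sketch runs anywhere. Each command's exit code is the
# only signal we need: 0 means the check passed.
CHECKS = {
    "tests": [sys.executable, "-c", "raise SystemExit(0)"],
    "lint":  [sys.executable, "-c", "raise SystemExit(0)"],
    "build": [sys.executable, "-c", "raise SystemExit(0)"],
}

def goal_reached(checks):
    """'Done' means every named check exits with status 0."""
    return {name: subprocess.run(cmd).returncode == 0
            for name, cmd in checks.items()}

print(goal_reached(CHECKS))  # → {'tests': True, 'lint': True, 'build': True}
```

A goal written this way leaves nothing to interpretation: either every value in that dictionary is True, or the work continues.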
With this level of detail, the AI does not stop after changing code.
It checks the result. If something fails, it fixes it. If something is missing, it continues.
A good goal is less about “what to do” and more about “what state counts as done.”
/goal Is A Work Method, Not A Prompt Trick
The strength of /goal is not that one answer becomes better.
The strength is that after each step, the AI looks at the current state again.
- Did a test fail?
- Did lint find a problem?
- Did the build fail?
- Is documentation missing?
- Is there still unfinished work?
If something is incomplete, it moves to the next step without the person typing “next” again.
For example, “refactor this” is unstable. The AI does not know how far to go.
This is steadier:
/goal Merge the duplicated save logic into one path without changing user-visible behavior. Add tests so the same issue does not return, and continue until existing tests and lint all pass.
This goal has a finish line. It has boundaries. It has a way to verify the result.
AI needs those boundaries if we want it to act with some autonomy.
Bad Goals And Good Goals
A bad goal usually looks like this:
/goal Make the app better.
The problem is clear.
There is no standard for “better.” There is no finish line. There is no way to verify the result.
That makes it easy for AI to widen the scope on its own.
A good goal looks more like this:
/goal Fix the settings page so users can see an error when saving fails. Add a test so the issue does not return, and continue until pnpm test and pnpm lint both pass. Do not change the page URL or server command structure.
This goal gives the AI something it can judge:
- What problem to fix
- What test to add
- Which commands to run
- What must not change
The more work we delegate to AI, the more these standards matter.
The Method I Recommend Most
I usually do not write the /goal myself first.
This can sound strange, but in practice it works better. People tend to write goals too loosely.
AI, on the other hand, can read the project and often propose better goals for that specific codebase.
I usually start with something like this:
First inspect this project structure. Suggest three tasks that would work well as /goal assignments. For each one, explain why it is a good goal, what the finished state should be, what constraints should be kept, and which commands should verify it. End with a ready-to-paste /goal sentence.
This lets the AI generate goal candidates that fit the project.
For a frontend app, that may include screen tests, lint, and build checks. For a Rust or Tauri app, it may include Rust tests and type checks.
The important point is not that I memorize every command.
The important point is that I make the AI read the project first, then ask it to shape a good goal.
More important than writing a good /goal is knowing how to make AI produce one.
Where /goal Works Especially Well
/goal is not needed for every task.
For a small change, a direct instruction is often faster.
But it works well when the task requires repeated checking.
For a larger cleanup:
/goal Replace all usage of the old helper function with the new helper function. Do not change user-visible behavior, and continue until all tests and lint checks pass.
For broken tests:
/goal Fix all currently failing tests. If there was a real bug, add coverage so it does not return. Do not delete or skip tests.
For documentation:
/goal Add examples to the public usage docs and remove outdated explanations. Also check that links inside the docs are not broken.
For repetitive editorial work:
/goal Add two relevant internal links to each of the 30 most recent posts. Do not add links that feel awkward in context.
These are the kinds of tasks where it is better to give AI the finish line than to keep typing “continue.”
Without Constraints, AI Finds The Easy Path
The riskiest part of long AI runs is that the result can look complete without being correct.
If I only say “make the tests pass,” the AI may remove a failing test.
If I only say “make lint pass,” it may weaken types.
If I only say “make the build pass,” it may route around the actual problem.
So a /goal should include restrictions.
Useful constraints include:
- Do not delete or skip tests
- Do not change user-visible behavior
- Do not add new libraries
- Do not modify unrelated files
- Do not reduce type safety
- Do not hide a real issue with fake data
If we give AI autonomy, we also need to give it boundaries.
Long Runs Need A Stop Condition
/goal can run for a long time.
For larger work, I prefer to add a stop condition.
For example:
If this is not complete after 20 attempts, stop and summarize what remains and why it is blocked.
Even if the goal is not reached, the AI must eventually stop and organize the current state.
This reduces the chance of repeating the same failure again and again.
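The stop condition turns the open-ended check-fix loop into a bounded one. A sketch of that loop, where `check` is a placeholder verification (a real run would execute the goal’s commands) and `attempt_fix` is a hypothetical stand-in for one round of AI edits:

```python
import subprocess
import sys

def check() -> bool:
    """Placeholder verification step; a real agent would run the goal's
    commands (e.g. pnpm test) and inspect the exit code."""
    cmd = [sys.executable, "-c", "raise SystemExit(0)"]
    return subprocess.run(cmd).returncode == 0

def attempt_fix() -> None:
    """Hypothetical stand-in for one round of AI edits (an assumption,
    not a real API)."""

def run_until_done(max_attempts: int = 20):
    """Check, fix, and repeat until the goal is met or the cap is hit."""
    for attempt in range(1, max_attempts + 1):
        if check():
            return ("done", attempt)
        attempt_fix()
    # Stop condition reached: report instead of looping forever.
    return ("blocked", max_attempts)

print(run_until_done())
```

The cap is what makes the failure mode useful: instead of a silent loop, the run ends with a state the human can act on.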
Claude Code And Codex Feel Different
Claude Code and Codex can both work toward a goal over a longer session, but they feel slightly different.
Claude Code often feels like it continues along a fixed goal. When used with an auto-approval mode, it can reduce mid-task confirmations and feel closer to handing off a block of work.
Codex often feels more like a cycle of planning, execution, and review. It reads the codebase, checks what remains, and moves into the next useful step.
But the important question is not which one is better.
Both are weak with vague instructions. Both are much stronger with goals that can be checked.
A Practical Template
The form I use most often is simple:
/goal [desired final state]. Done means: [condition 1], [condition 2], [condition 3]. Constraints: [things not to do]. Verify with: [command 1], [command 2], [command 3]. If blocked, stop and explain the exact reason.
For example:
/goal Stabilize the save experience on the publishing settings screen. Done means: success and failure states are clearly visible, repeated clicks on the save button do not create duplicate saves, and failed saves preserve the user's input. Constraints: do not change the existing route structure, do not add new dependencies, and do not remove current tests. Verify with pnpm test, pnpm lint, and pnpm build. If blocked, stop and explain the exact reason.
This is not beautiful prose.
But it is useful.
It gives AI a destination, a boundary, and a measuring tool.
That matters more than a bigger prompt.
The Real Shift
A lot of people still treat AI coding tools like faster autocomplete.
That is useful, but it is not the main shift.
The bigger shift is that AI can now hold a task across multiple steps. It can inspect, change, test, fix, and repeat.
But only if the task has a shape it can hold.
A vague prompt asks AI to guess.
A good goal gives it a state to reach.
The future of AI coding is not bigger prompts. It is better definitions of done.
That is why I think /goal matters.
It changes the interaction from constant instruction to supervised delegation.
And that is the difference between using AI as a fast tool and using it as a real working partner.
