Codifying taste

When code is cheap, evaluation becomes the work.

Something changes when a team starts leaning on AI agents.

Code appears faster. It compiles. Tests pass. Variable names look right.

But a senior reads it and pauses. Something is off.

The abstraction is slightly wrong. The error handling is performative. The boundary leaks. The clever part will rot in six months.

That feeling is taste.

For years, it lived quietly in a few people’s heads. The ones who could say “no, not like that” — sometimes without fully explaining why. Mentorship was the slow transfer of that instinct, one rejected pull request at a time.

AI does not remove taste. It makes it the bottleneck.

Agents are fluent. They generate plausible code endlessly.

What they do not have is judgment.

The sense that this abstraction fits here. That this error must not be swallowed. That this service boundary is wrong. That this migration should not be generated.

And juniors working with these agents do not develop that judgment by default.

They develop a different instinct:

If it compiles and the agent sounds confident, ship it.

That is the real risk.

Not bad code. Bad judgment.

Making judgment visible

So the work shifts.

Not writing code. Not even generating it.

But codifying taste — pulling judgment out of people’s heads and making it visible.

In practice, it looks simple.

A small archive of agent output versus final code, annotated with why.

Not vague verdicts like “cleaner,” but specifics.

This try/catch hides a failure mode we care about.
This introduces a circular dependency we removed on purpose.
This abstraction looks generic but breaks a boundary we rely on.
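One entry in such an archive could be as small as this — a minimal sketch, where the record shape, the file path, and the code snippets are all illustrative, not from a real repo:

```python
from dataclasses import dataclass

@dataclass
class ReviewNote:
    """One annotated diff between what the agent produced and what shipped."""
    file: str           # hypothetical path, for illustration only
    agent_version: str  # what the agent generated
    final_version: str  # what actually landed
    why: str            # the specific judgment — never just "cleaner"

note = ReviewNote(
    file="payments/retry.ts",
    agent_version="try { await send() } catch (e) { /* ignore */ }",
    final_version="await send()  // let the retry queue see the failure",
    why="This try/catch hides a failure mode we care about: "
        "swallowed errors never reach the dead-letter queue.",
)
```

The point is the `why` field: it carries the judgment, and it is the only part a future reader cannot reconstruct from the diff alone.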

A rules file the team maintains together.

Not theoretical guidance. Concrete constraints.

No barrel exports.
DTOs live with their controller.
Migrations are always reviewed manually.

Partly for the agents. Mostly for the humans.
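Some of those constraints can even be turned into cheap automated checks. A sketch of what that might look like for the “no barrel exports” rule, assuming a TypeScript repo layout — the heuristic and file conventions here are assumptions, not a real linter:

```python
import re
from pathlib import Path

# A re-export statement: "export * from ..." or "export { ... } from ..."
RE_EXPORT = re.compile(r"^\s*export\s+(\*|\{)")

def find_barrel_files(root: str) -> list[Path]:
    """Flag index.ts files that do nothing but re-export other modules."""
    offenders = []
    for index in Path(root).rglob("index.ts"):
        lines = [
            line for line in index.read_text().splitlines()
            if line.strip() and not line.strip().startswith("//")
        ]
        # A barrel file is nothing but re-export statements.
        if lines and all(RE_EXPORT.match(line) for line in lines):
            offenders.append(index)
    return offenders
```

A check like this does not replace the rules file — it enforces the mechanical tail of it, so review attention stays on the judgment calls.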

And then the most valuable artifact: recorded course-corrections.

Sessions where someone pushed back on the agent multiple times before landing on something usable.

That loop — reject, refine, redirect — is where taste becomes visible.
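A recorded session can be little more than the sequence of pushbacks and the reasons given. A minimal sketch — the field names and session content are invented for illustration:

```python
# One recorded course-correction session, captured as a list of turns.
# Every name and reason here is hypothetical.
session = [
    {"turn": 1, "action": "reject",
     "reason": "Generated a migration; migrations are reviewed manually."},
    {"turn": 2, "action": "refine",
     "reason": "Error handling swallowed the timeout; surface it instead."},
    {"turn": 3, "action": "redirect",
     "reason": "Abstraction crossed a service boundary we rely on."},
    {"turn": 4, "action": "accept", "reason": "Landed."},
]

# The taste lives in the rejection reasons, not in the final diff.
rejections = [t["reason"] for t in session if t["action"] != "accept"]
```

Even this crude format preserves what the final commit erases: every alternative that was considered and turned down, and why.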

The skill that remains

The deeper shift is this:

AI compresses the production of code.

It does not compress the evaluation of it.

So evaluation becomes the skill.

Teams that ignore this get faster, but weaker — shipping plausible code that nobody trusts.

Teams that codify it get stronger — because judgment stops being individual and becomes shared.

In systems where correctness matters — banking, money movement, stateful workflows — this is not optional.

A passing test suite is not enough.

A confident agent is not enough.

Taste is what decides whether the system holds under pressure.

Taste used to be the last thing to transfer.

Now it needs to be the first.