Why We're Doing This
This is a big shift. Whether you’ve been writing code for ten years or ten months, what you’re about to read will change how your day-to-day works. That might feel exciting, unsettling, or both. All of those reactions are fair. Read the whole chapter before you decide how you feel about it.
The short version
AI coding agents can now do a big chunk of the coding we used to do by hand. Not perfectly. They make mistakes. They need watching. They can’t make design decisions. But for clear, well-scoped tasks? They’re fast, they don’t get tired, and they get better every month.
We’re adopting these tools to:
- Ship more with the same team. We’re a small crew. Agents let us punch above our weight.
- Spend time on work that matters. Design, architecture, client conversations, code review — the stuff that needs a human brain.
- Spend less time on work that doesn’t. Boilerplate, repetitive CRUD, test scaffolding, docs updates, dependency bumps.
- Not burn out doing it. Agents can run in the background while you’re in a meeting, eating lunch, or done for the day. This is what Balance looks like in practice.
What actually changes
Your day-to-day is going to flip. Here’s the honest version:
Before (where most of us are now)
You know the loop: ticket, IDE, code, tests, PR, review comments, fix, merge, repeat. AI helps with autocomplete and answering questions in chat. That’s about it.
It works. But it doesn’t scale. You hit a ceiling where the only way to ship more is to work more hours, and on a small team you hit that ceiling fast. The frustrating part isn’t the coding itself. It’s the time spent on work that doesn’t require your judgment but still eats your day.
Your time looks roughly like: 50% writing code, 15% planning, 35% everything else.
After (where we’re headed)
You get a ticket — but a good one. It tells you what needs to happen and why, with clear acceptance criteria. You read it and decide: is this unique enough that I need to spec the technical approach myself, or does an existing pattern cover it? If it’s covered, you delegate it straight to an agent. If it’s unique, you write a short technical spec, break it into small pieces, and hand those to the agent. Either way, the agent writes the code, runs the tests, and opens a PR. You review it carefully (you didn’t write it, so you need to actually read it). You fix what’s wrong or send it back. It merges. Repeat, but faster, and in parallel.
This only works if tickets are good. A vague ticket wastes everyone’s time, human or AI. We’ve put together a gold-standard ticket template for this. Every section maps to what an agent needs: context, desired outcome, testable criteria, constraints, and a clear definition of done.
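As a rough illustration only (the section names below mirror the ones listed above; the actual gold-standard template in our docs is the source of truth, and the example endpoint is made up), a ticket in that shape might look like:

```markdown
## Context
Why this work exists: the client need, the system it touches,
and links to related tickets or design docs.

## Desired outcome
What should be true when this is done, described as behavior,
not implementation.

## Acceptance criteria
- [ ] Testable statement, e.g. "POST /invoices returns 422 when
      customer_id is missing" (hypothetical endpoint)
- [ ] Another testable statement

## Constraints
Patterns to follow, areas not to touch, performance or
compliance requirements.

## Definition of done
Tests pass, docs updated, PR reviewed and merged.
```

Every section answers a question the agent can’t answer for itself; if you can’t fill one in, that’s usually a sign the ticket isn’t ready to delegate.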
When you pick up a ticket, you’re making one decision: spec or delegate?
- Delegate directly — you’ve solved this kind of problem before and captured the pattern (more on this in Phase 6). The ticket description is enough for the agent to do the work. The pattern provides the technical how; the ticket provides the what and why.
- Spec first — the implementation is novel, touches sensitive areas, or involves architectural decisions. You write a short technical spec covering the approach, then break it into agent-ready tasks. You can use the AI to help draft the spec too. Over time, even spec-writing becomes a captured pattern.
In practice, there’s more nuance here than two bullet points can capture. The Spec-First Workflow chapter covers the full decision framework. For now, the key thing is that delegation isn’t binary. Some tasks are a clean hand-off, some need a detailed spec, and some need you at the keyboard. The judgment to tell the difference is what we’re building.
Either path works. The ticket quality is what makes both paths possible.
Your time looks roughly like: 55% planning and specifying, 20% writing code, 25% reviewing and testing.
That’s not a typo. Planning becomes your main job. Your exact split will vary (greenfield architecture looks different from maintaining a legacy system) but the direction holds across the board.
If you’re newer to engineering, that might feel intimidating. “I’m still learning to code, and now I have to learn to spec too?” Yes, but nobody writes great specs on day one. That’s fine. The gold-standard ticket template is there to guide you, and we’ll practice together.
If you’re more experienced, the shift might feel different: “I spent years getting great at building things, and now you want me to stop?” Not quite. The things that made you a great coder — understanding systems, anticipating edge cases, knowing what “done” looks like — those are exactly the things that make a great spec. You’re not losing value. You’re applying it somewhere it matters more.
KodeNerds, a dev shop that built a HIPAA-compliant healthcare platform, saw exactly this shift. Their team went from 50% implementation / 15% planning to 20% implementation / 55% planning (KodeNerds, “Intent-Driven Development 2026”). Sounds backwards until you try it. The better your tickets and specs, the better the agent’s output. The better the output, the less time you spend fixing things.
The mental model
Think of a coding agent as a brilliant intern who codes really fast but knows nothing about your project. They’ve never met the client. They don’t know why you chose that architecture. They don’t know about the weird bug in the billing module. Your job shifts from “write the code” to “make sure the intern knows exactly what to build, and catch it when they build the wrong thing.”
That means three things change:
Your specs need to be better. Here’s why that matters: when a human coder gets a vague ticket, they come over and ask you questions. An agent won’t do that. It’ll just… build something. Confidently. And what it builds will look right (it’ll pass a casual glance) but the logic might be quietly wrong. So the quality of what you get out is directly tied to what you put in. The good news is that writing a solid spec is a learnable thing. We’ve got templates for it, and you’ll get plenty of practice.
Your reviews need to be sharper. Here’s the thing about agent-generated code: it sounds very sure of itself, even when it’s wrong. You’ll be reading code you didn’t write, and your job is to catch the subtle stuff. Faros AI studied 10,000+ developers and found that teams using agents merged nearly twice as many PRs, but spent 91% more time reviewing them. That’s real. But it’s not wasted time. If you’re more experienced, think of it as less building and more quality control. The underlying ability (reading systems, spotting issues) is the same one that made you good at building in the first place. If you’re newer, this is actually great news. Every review is a window into how problems get solved. Different patterns, different trade-offs, real code in real contexts. You’ll build judgment faster reviewing agent output than you would writing everything from scratch.
Your documentation becomes infrastructure. The files that tell agents how to behave (CLAUDE.md, AGENTS.md and similar) are now as important as your CI pipeline. They’re how you control what agents do and don’t do. Think of them as the onboarding doc you’d write for that brilliant intern. The Context Files chapter goes deep on this.
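To make that concrete, here is a sketch of what such a context file might contain. Everything in it is illustrative (the stack, paths, and rules are invented for this example, not our actual configuration); the Context Files chapter covers what ours should really say.

```markdown
# Project context for coding agents (illustrative example)

## Architecture
- Next.js app with Postgres; all DB access goes through
  src/db/queries.ts (hypothetical path).
- Never import the ORM directly from route handlers.

## Conventions
- TypeScript strict mode; avoid `any`.
- Every new endpoint gets an integration test in tests/api/.

## Boundaries
- Do not modify the billing module without explicit instruction.
- Do not change CI config or bump dependency versions.
```

Like the intern’s onboarding doc, it encodes the decisions and boundaries the agent would otherwise have to guess at — which is exactly where drift starts.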
What this is NOT
Let’s be clear about two things.
It’s not vibe coding
“Vibe coding” means accepting AI output without really checking it. It’s fun for side projects. It’s dangerous for client work. It ships bugs, opens security holes, and, across longer sessions, creates something called drift. Anthropic’s own engineering team documents how context degrades over time: architectural decisions get lost, small choices compound, and the codebase quietly diverges from its original design. Their solution is aggressive context management that explicitly “preserves architectural decisions, unresolved bugs, and implementation details” (Anthropic, “Effective Context Engineering”). Without that discipline, you get drift.
Drift is sneaky. Nobody makes a bad decision on purpose. The agent just quietly makes small choices that add up. Before you know it, your architecture looks different from what you designed.
That said, experiment freely, just not in client code. Spin up a throwaway branch. Give the agent a wild prompt. See what happens. That’s how you build intuition for what it can and can’t do. The line is simple: explore in sandboxes, be disciplined in production.
This is Integrity in action. We don’t ship code we don’t understand. Right now, that means reading every line the agent writes. Every developer, regardless of experience level, shares that responsibility. Senior devs: you’re the last line of defense in reviews. Junior devs: your fresh eyes catch things that familiarity blinds people to. Over time, it means writing specs so precise and test suites so solid that only quality can get through.
It’s not about replacing anyone
Let’s address the elephant in the room. When people hear “AI coding agents,” they worry about their jobs. That’s natural, and we’re not going to pretend otherwise.
Here’s the truth: we need you more, not less. The work is shifting, not disappearing. Someone has to understand the client’s business. Someone has to design systems that hold up under pressure. Someone has to look at what the agent produced and say “this is wrong, and here’s why.” That someone is you.
The developers who thrive in this shift aren’t the ones who adopt the most tools. They’re the ones who develop judgment. The ability to tell when the AI is confidently wrong. To catch the subtle bug that passes every test. To know when the spec needs rethinking. That’s a deeply human thing, and it gets more valuable over time, not less.
A few things that might be on your mind:
- “Who’s accountable when agent code has a bug?” You are, same as always. You reviewed it, you approved the PR, your name is on it. The agent is a tool, not a scapegoat. That’s not new pressure. It’s the same standard we’ve always held, applied to a new workflow.
- “Will the quality bar drop because the AI is fast?” No. Speed without quality is just faster failure. Our standards don’t change because the code was written by an agent. If anything, the review bar goes up.
- “Do I lose the freedom to choose my own approach?” No. You choose how to spec the work. You choose what to delegate and what to build yourself. This course gives you a framework, not a straitjacket.
We’re investing in you learning this because we believe in this team. That’s Growth. Not just getting faster, but getting better at work that matters.
How we’re doing this differently
You’ve probably seen what AI adoption looks like at other companies. It usually goes one of two ways, and both are ugly.
The first is top-down mandate: leadership reads a McKinsey report, buys enterprise licenses, sends an all-hands email, and expects productivity gains by next quarter. No training. No workflow changes. No support. Developers are left to figure it out alone while being measured on speed metrics that ignore quality. “Use AI or fall behind” — said by people who’ve never opened a terminal.
The second is worse: AI as a headcount argument. “We don’t need as many juniors now. The AI can do that work.” Teams get cut. The remaining developers are expected to absorb the load with their shiny new tools. Morale craters. The best people leave. The codebase rots because nobody has time to think. They’re too busy feeding prompts to an agent and reviewing output they don’t fully understand.
That’s not us. Here’s what’s different:
- We’re training you before we expect results. This course, the Anthropic courses, the practice time — it’s all investment. We don’t expect you to be fluent on day one.
- We’re giving you room to learn. You’re not figuring this out alone. We’ll pair on it, share what’s working, and nobody is going to be judged for struggling with a new thing.
- We’re protecting the quality bar, not sacrificing it. Faster output doesn’t matter if it’s broken. Our standards go up, not down.
- We’re keeping humans in charge. The agent works for you. You don’t work for the agent. Full stop.
This isn’t a mandate. It’s an opportunity we’re building together.
Why now
You might wonder: why not wait until the tools are more mature?
Because they’re good enough now to make a real difference, and the teams that build these habits early will have a serious edge.
But honestly, the hard part isn’t the tools. It’s the habits. Learning to write good specs. Reviewing agent code critically. Managing parallel sessions without losing track. These things take practice. Starting now means we’ll be comfortable by the time the next generation of tools arrives.
And here’s the thing: these habits make you a better engineer even without the AI. Writing clearer specs? Reviewing code more carefully? Breaking work into small, well-defined pieces? That’s just good engineering. The agents are the catalyst, but the skills are yours to keep.
What’s next
Section titled “What’s next”Before you do anything else: join the Rokkit200 Claude team account. You’ll need this for the Claude 101 course and for everything that follows.
Sign in with an existing Anthropic account or create a new one. Once you’ve joined, you’re on a Team Standard seat.
Then: the Claude 101 course. Even if you’ve been using AI tools already, it covers what Claude actually is, how to have effective conversations with it, and the prompt techniques that underpin everything else in this playbook. It’s quick, and it’ll give us all a shared vocabulary before we get into the hands-on tooling.
☁ Next up: Claude 101 on Anthropic Skilljar — take it, then come back for the next chapter.