Author: Trent Cockerham

  • We Rebuilt the Platform. More Importantly, We Rebuilt How We Build.


    Four years ago, we built the platform Psych Hub needed.

    It worked. It grew. It supported real customers. It carried us a long way.

    But over time, something became obvious.

    The system we built was not designed for the future we were trying to create.

    The codebase was hard to reason about.
    The database needed to be rethought from the ground up.
    The user experience had years of friction layered into it.
    Our backlog was full of things we knew mattered, but could never quite justify prioritizing.

    And we are not a massive team.

    At some point, incremental improvement started to feel like the real risk.

    So we made a decision that felt, at the time, about an 8 out of 10 on the insanity scale.

    We stopped building on top of the old system.

    Not fully. We still supported the platform. We still handled major issues. We still had customers relying on us.

    But we stopped pretending the old foundation could stretch forever.

    We decided to rewrite the game mid-play.

    I told people: give us two weeks. If this is a bad idea, we will know quickly and go back.

    We never went back.

    Getting on the Tallest Lift

    When I was in middle school, I decided to learn to snowboard.

    Instead of starting on the bunny slope, my friends and I got on the tallest lift at the mountain.

    We all fell getting off.

    It took hours to get down that first run. Falling. Sliding. Getting up. Falling again.

    By day three, we were flying.

    That is what this rebuild felt like.

    We did not ease into it.
    We did not slowly refactor around the edges.
    We did not spend months creating a perfect plan before touching the product.

    We got on the biggest lift and committed to figuring it out on the way down.

    What We Actually Rebuilt

    In a matter of weeks, we migrated the platform we had spent years building into a completely new system.

    Not a cosmetic update.
    Not a fresh coat of paint.
    Not a rewrite for the sake of rewriting.

    A dramatically better user experience.
    A database normalized from the ground up.
    A new backend.
    A new frontend.
    A new back office.
    Multiple products and proof-of-concepts consolidated under one roof.
    A year’s worth of customer feedback and backlog items addressed in one motion.

    The new platform brings together training, content, administration, enterprise workflows, and new AI-enabled capabilities in a way the old system never could.

    Things we thought we would never get to are now live.

    That is still a little surreal to say.

    But the bigger story is not only what we shipped.

    It is how we shipped it.

    The New Development Model

    This rebuild was made possible by an agent-native development pipeline we designed from scratch.

    Memory files live inside the repo and evolve with the product.
    Skills generate tickets and research requirements.
    Orchestration logic decides whether work needs a single agent or a larger sub-agent flow.
    Multiple models review implementation plans and code before we do.
    Code review happens before humans ever see the pull request.
    Agents write tests, update tests, and revise their own work.
    Infrastructure is managed as code.
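    That routing step — one agent or a larger sub-agent flow — can be sketched as a simple heuristic. Everything below is illustrative: the `Ticket` fields and thresholds are invented for the example, not our actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """A hypothetical work item. Field names are illustrative, not a real schema."""
    title: str
    files_touched: list = field(default_factory=list)
    requires_migration: bool = False

def route(ticket: Ticket) -> str:
    # Small, single-surface changes go to one agent; anything cross-cutting
    # or risky fans out to a planner/implementer/reviewer sub-agent flow.
    if ticket.requires_migration or len(ticket.files_touched) > 5:
        return "sub-agent-flow"
    return "single-agent"

print(route(Ticket("fix copy typo", files_touched=["Header.tsx"])))  # single-agent
```

    Keeping this decision deterministic is the point: the pipeline stays predictable even when the agents inside it are not.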

    Humans are still deeply involved.

    We supervise.
    We refine.
    We challenge.
    We make judgment calls.
    We decide what good looks like.

    But we are not operating the same way anymore.

    We are not typing every line of code by hand.
    We are not spending weeks waiting for perfectly groomed tickets.
    We are not treating prototypes as disposable theater.

    The prototype is the spec.
    The review is the process.
    The loop is the operating model.

    Our retros now are not conversations about story points.

    They are conversations about workflow pain, agent performance, context quality, review quality, and where human judgment needs to be inserted earlier.

    We did not just rebuild the platform.

    We rebuilt how we build.

    Why It Had to Happen

    The uncomfortable truth was simple:

    With a small team and big ambition, the status quo was not going to get us there.

    Waiting would have been slower than rebuilding.

    The old architecture could not support the speed we needed. It could not support the quality bar we wanted. And it definitely could not support agents operating effectively inside it.

    If we wanted to build something much bigger than our team size suggested was possible, the foundation had to change.

    So we changed it.

    The Part I’m Most Proud Of

    I have been a primary contributor on this rebuild.

    Not because I am the only one capable.

    Because this new model makes it possible.

    I can move like a team.

    Research competitors.
    Spin up prototypes.
    Port them into real repos.
    Migrate data.
    Refine UX interactions.
    Test flows.
    Review implementation.
    Push the system until the product feels like a leap forward instead of another marginal improvement.

    Features that used to take weeks now take hours.

    Not because we are cutting corners.

    Because the machine removes friction.

    That is the part that is hard to fully explain until you experience it.

    A lot of product leaders talk about using AI to prototype. That is useful.

    But this is different.

    We are not just making clickable demos faster.

    We are building and shipping a real enterprise platform with agents as first-class contributors.

    That is a different chapter.

    What This Proves

    The platform is live.

    The migration is complete.

    The user experience is dramatically better. Features that sat in the backlog for months or years are now part of one cohesive system.

    And the team now has a new operating model for what comes next.

    That matters.

    Because the lesson here is not “AI can help you code faster.”

    That is too small.

    The lesson is that small teams can be much more powerful than they used to be if they are willing to rethink the entire system around how work gets done.

    Not just the tools.
    Not just the prompts.
    Not just the prototypes.

    The operating model.

    Small teams can be mighty when they build the machine around them.

    Four years ago, we built what we could with the tools and model we had.

    Today, we are building differently.

    And now that we have seen what is possible, there is no going back.

    When we got off that tallest lift in middle school, we fell. Everyone does the first time.

    But you do not remember the falls.

    You remember realizing you could ride.

    This rebuild felt impossible not long ago.

    Now it feels like the only way forward.

  • I Got Tired of Clicking “Continue”


    I didn’t set out to build a system that builds products.

    I just got tired of clicking “continue.”

    The first real shift didn’t happen with some big architectural decision. It happened because I was behind on a feature and decided to just vibe in Cursor and brute force it with AI.

    No process. No system. Just prompting and shipping.

    And it worked.

    That was the first “oh shit” moment.

    Not in a hype way. In a very practical way. I realized I could take something from idea to working code in hours instead of days. Not perfectly, but real enough to ship and iterate.

    That’s when we started taking it seriously.

    We moved into CLI-based workflows with Claude and began thinking about what a real process could look like if AI wasn’t just a helper in the IDE, but the thing actually doing the work.


    The Old Model

    Before this, we were operating like most product teams.

    Idea → spec → roadmap → sprint → feature.

    AI was there, but lightly. Mostly inside the IDE. Helping write code faster, not changing how we worked.

    It still required:

    • PRDs
    • sprint planning
    • story point estimation
    • backlog grooming
    • manual releases
    • a bunch of SaaS tools stitched together

    It worked. But it was slow.


    Where It Actually Broke

    The first version of this “new” workflow wasn’t a system.

    It was just us using AI more aggressively.

    We built out skills in Claude Code. Stored them in a repo. Refined them over time. Got to a point where we could consistently ship real code through it.

    But everything was still manual.

    I was the orchestrator.

    I was:

    • telling it to continue
    • reviewing changes locally
    • committing code
    • waiting for CodeRabbit feedback
    • going back to Claude to fix issues
    • repeating that loop multiple times

    At some point I realized I wasn’t building product anymore.

    I was managing a workflow.

    And most of that workflow was just me clicking “continue” and trying to keep multiple terminal tabs straight.


    Enter Dumb Eric

    That’s when we built the first version of the orchestrator.

    A deterministic system that could:

    • pull in tasks from Linear
    • execute steps in order
    • hand off between stages
    • pause when human input was needed
    • move forward automatically when it wasn’t

    We called it Dumb Eric because it wasn’t trying to be smart.

    It just ran the flow.

    That alone changed everything.
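    The flow Dumb Eric ran can be sketched as a deterministic loop. Stage names and the `run` function here are illustrative; the real system talks to Linear, git, and a review bot.

```python
# Stage names are illustrative; the real orchestrator pulls tasks from
# Linear and hands artifacts between stages.
PIPELINE = ["fetch_task", "plan", "implement", "open_pr", "address_review"]

def run(task, stage_fns):
    """Execute stages strictly in order; stop where human input is needed."""
    for stage in PIPELINE:
        if stage_fns[stage](task) == "needs_human":
            return {"paused_at": stage}
    return {"paused_at": None}

# Toy stage functions: everything is automatic except the review step.
auto = lambda task: "done"
stages = {s: auto for s in PIPELINE}
stages["address_review"] = lambda task: "needs_human"

print(run({"id": "ENG-42"}, stages))  # {'paused_at': 'address_review'}
```

    Nothing clever. It just runs the flow and knows when to stop.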


    The Next Problem

    Once the orchestrator was in place, a new gap showed up.

    The system worked, but it lacked judgment.

    It could execute steps, but it couldn’t reason about them well.

    So we layered an LLM on top.

    Now we had:

    • deterministic orchestration for structure
    • LLM reasoning for decision-making

    That combination turned out to be the real unlock.

    Not autonomous agents running wild.

    Not rigid workflows.

    The blend of both.
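    That blend fits in a few lines: deterministic control flow, with the model consulted only at the judgment point. `classify` here is a stub standing in for a real LLM call.

```python
def next_step(failure_log: str, classify) -> str:
    """Deterministic control flow; the model is consulted only at the
    judgment point. `classify` stands in for a real LLM call and should
    answer "transient" or "real"."""
    verdict = classify(f"Is this test failure transient or real?\n{failure_log}")
    return "retry" if verdict == "transient" else "open_fix_ticket"

# A stub classifier so the sketch runs without a model.
stub = lambda prompt: "transient" if "timeout" in prompt else "real"
print(next_step("connection timeout after 30s", stub))  # retry
```

    The structure decides what happens next. The model only decides which branch you're on.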


    Then It Started Improving Itself

    As we used the system more, it started breaking in predictable ways.

    Edge cases. Bugs. Inefficiencies.

    Instead of fixing those manually, we built a self-improvement loop.

    The system could:

    • identify issues
    • create tickets for itself
    • propose improvements
    • implement changes

    At that point, something interesting happened.

    It started getting better without us directly touching it.
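    A toy version of that loop: count recurring failure tags across runs and file a ticket when one keeps showing up. The log and ticket shapes are invented for illustration.

```python
from collections import Counter

def propose_tickets(run_logs, threshold=3):
    """File an improvement ticket for any failure tag that keeps recurring.
    Log and ticket shapes are made up for this sketch."""
    counts = Counter(tag for log in run_logs for tag in log.get("failures", []))
    return [
        {"title": f"Reduce recurring failure: {tag}", "evidence_count": n}
        for tag, n in counts.items()
        if n >= threshold
    ]

logs = [{"failures": ["flaky_migration"]}] * 3 + [{"failures": ["oom"]}]
print(propose_tickets(logs))
```

    One-off failures stay noise. Patterns become work items.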


    Then I Became the Bottleneck Again

    The more it improved, the more suggestions it generated.

    And suddenly I was back in the loop.

    Reviewing every change. Approving every improvement.

    Different work. Same bottleneck.

    So we added auto-merge with confidence thresholds.

    If the system was confident enough, it could ship its own improvements.

    Now Dumb Eric updates himself.
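    The gate itself can be tiny. A sketch, with illustrative fields on `change`; anything that misses the bar falls back to human review.

```python
def should_auto_merge(change, threshold=0.9):
    """Ship only when every signal clears the bar; otherwise fall back to
    human review. The fields on `change` are illustrative."""
    return (
        change["confidence"] >= threshold
        and change["tests_pass"]
        and not change["touches_protected_paths"]
    )

change = {"confidence": 0.95, "tests_pass": True, "touches_protected_paths": False}
print(should_auto_merge(change))  # True
```

    The threshold is the dial. Turn it up and you're back in the loop; turn it down and you're trusting the system more.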


    Somewhere Along the Way, This Stopped Being a Product Workflow

    At this point, what we had wasn’t just a better dev process.

    It was a system that could build product end to end.

    Features. Documentation. Knowledge base updates.

    And then we extended it to content.

    Our course creation pipeline now looks exactly like our software pipeline:

    • Linear task with human inputs
    • AI writes brief, storyboard, research
    • human approves
    • AI writes scripts
    • human approves
    • AI produces content (voice, visuals, slides)
    • changes are made by prompting and re-rendering

    The stack is a mix of local models, animation tools like Rive, and code-based video editing with Remotion.

    But the important part isn’t the tools.

    It’s the workflow.


    What Disappeared

    We didn’t replace our old process with a better version.

    We made most of it unnecessary.

    No more:

    • PRDs
    • long fantasy roadmaps
    • sprint planning
    • story points
    • backlog grooming
    • manual releases
    • buying SaaS for everything
    • costly, manual course production workflows

    Course production used to be a real investment.

    Planning, scripting, recording, editing, revisions. It added up quickly in both time and cost.

    Now it runs through the same system as everything else.

    A course goes from idea → brief → script → production through a pipeline, with human approvals in the right places.

    The marginal cost of producing a course is basically zero.

    Not because the work disappeared.

    Because the system does it.


    What Surprised Me

    A few things I didn’t expect:

    Deterministic orchestration + LLM reasoning is far more effective than either alone.

    Agents don’t need to be that smart if the workflow is good.

    Self-improving systems actually work in practice.

    And the biggest one:

    The bottleneck isn’t building anymore.

    It’s human decision speed.


    What This Actually Is

    If I had to describe it in one sentence:

    It’s a system that builds product end to end.

    Right now, it’s a product factory.

    By the end of the year, it’ll be a company factory.


    How This Actually Happened

    This wasn’t designed upfront.

    It emerged.

    Every step came from removing pain:

    • too slow → use AI
    • too manual → build orchestrator
    • too rigid → add LLM reasoning
    • too fragile → add self-improvement
    • too dependent on me → add auto-merge

    Every time I became the bottleneck, I removed myself.


    Where to Start

    Don’t start by trying to build a system.

    Start by feeling the pain.

    Build something real first.

    For us, that meant:

    • using Claude Code skills
    • getting to a point where we could actually ship code through it
    • refining that process until it worked

    Only then did we start removing friction.

    If you don’t feel the pain, you’ll overbuild the system.


    The Shift

    The goal isn’t to use AI to build faster.
    It’s to build a system where product is the output, not the work.

    That’s the difference.

    Most teams are still trying to use AI inside their existing process.

    The real shift is building a system where the process disappears.

    Once you cross that line, everything changes.

  • What Do We Replace the Roadmap With?


    I’ve never really loved roadmaps.

    Not because direction doesn’t matter. It does. But because, beyond two quarters, they’ve almost never been real.

    In most companies I’ve worked at, Q1 is mostly accurate. Q2 starts to wobble. By Q3 you’re explaining why the world changed. By Q4 you’re rewriting the story.

    And yet we keep pretending.

    We build slide decks projecting certainty. We sequence features nine months out. We treat deviation like failure instead of reality.

    The older I get, the less patience I have for that.


    What We Did Instead

    Twelve years ago at Koddi, we didn’t really run the company off roadmaps.

    That wasn’t some philosophical stance. We were just a bunch of kids in our twenties trying to build a business.

    We knew the big projects. Adding advertising partners to our platform could take months. Infrastructure work took time.

    But we didn’t manage the company through a feature timeline.

    Every week, a small group of us would get in a room and ask one question:

    What do we have to do this week?

    That was it.

    Every week was essentially a reset. We looked at the state of the business and recalibrated around what mattered most right now.

    We didn’t optimize for everyone feeling productive because they were checking things off a roadmap.

    We optimized for impact.

    Now here’s the important part: it wasn’t chaos.

    We had a plan. It was just very simple.

    We had a monthly revenue target, and we almost always hit it. There was one month we didn’t. A lot of it was outside our control. But the entire company sat down and talked about what we needed to change to make sure that didn’t happen again.

    Our roadmap was basically one line:

    Hit revenue. Stay alive. Fuel growth.

    Everything else bent around that.


    The Constraint Was the Plan

    When revenue is the constraint, the conversation changes.

    You don’t argue about whether Feature A should come before Feature B because it’s “on the roadmap.”

    You ask:

    What moves revenue?
    What unblocks sales?
    What improves performance?
    What closes the gap?

    If we were on track, we doubled down.

    If we weren’t, we recalibrated.

    Planning wasn’t about predicting the year. It was about responding to the constraint.

    Nicholas, our president and a close friend, used to say:

    “Do better and bigger just happens.”

    At the time it sounded almost too simple.

    But that was the operating philosophy.

    Do better:

    • Improve conversion
    • Add partners
    • Fix performance
    • Close deals
    • Remove bottlenecks

    Bigger followed.

    We didn’t obsess over scaling before we had improved. We improved relentlessly, and scale emerged.

    A small group of kids in their twenties scaled that business to millions in revenue within a few years. Not because we had perfect planning. Not because we ran perfect Agile.

    Honestly, we didn’t really do Agile at all.

    We kind of just did work.

    Important work. Constraint-driven work. Weekly reset work.


    Planning Around the Real Constraint

    Looking back, the lesson wasn’t that roadmaps are useless.

    The lesson was simpler:

    The roadmap was never the plan.
    The constraint was.

    Great teams organize around the scarcest thing that actually matters.

    At Koddi, that constraint was revenue.

    That clarity allowed the team to move quickly, reset constantly, and focus on the highest leverage work each week.


    The Constraint Is Changing

    In an AI-native world, the constraint is shifting.

    Execution cost is collapsing. Prototypes can be built in days. Small teams can build things that used to require entire engineering departments.

    But a new constraint is emerging:

    Token capacity and compute.

    Every AI-native team now operates within some form of token budget or compute budget, whether they track it explicitly or not.

    My guess is most teams are dramatically underutilizing that capacity. They’re paying for tokens but not deploying them effectively.

    The real job of product teams may start to look like this:

    Make sure your token budget is being used on the highest leverage tasks possible.

    Shipping improvements.
    Learning faster.
    Running experiments.
    Removing bottlenecks.
    Improving the system itself.

    And as learning loops happen, what counts as “highest leverage” may change day to day.

    Just like revenue was the constraint at Koddi, tokens and compute are quickly becoming the constraint for AI-native teams.

    Planning becomes less about predicting features and more about allocating that capacity wisely.
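    One way to picture that allocation: a greedy planner that funds work by leverage per token until the budget runs out. The numbers and task names here are invented, and leverage scores would come from human judgment, not measurement.

```python
def allocate(budget_tokens, candidates):
    """Greedily fund work by leverage per token until the budget runs out.
    A toy model: leverage scores are judgment calls, not measurements."""
    plan, remaining = [], budget_tokens
    for c in sorted(candidates, key=lambda c: c["leverage"] / c["cost"], reverse=True):
        if c["cost"] <= remaining:
            plan.append(c["name"])
            remaining -= c["cost"]
    return plan, remaining

candidates = [
    {"name": "fix_onboarding_bug", "cost": 1_000_000, "leverage": 9},
    {"name": "rewrite_reporting", "cost": 8_000_000, "leverage": 16},
    {"name": "polish_animations", "cost": 4_000_000, "leverage": 2},
]
print(allocate(10_000_000, candidates))
```

    The weekly reset is just re-running this with fresh leverage scores.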

    Even some of the largest AI companies hint at this kind of thinking. NVIDIA’s Jensen Huang has said their long-term plan is largely defined by what they’re doing today and how they adapt as the landscape changes.

    That mindset sounds a lot like the weekly resets we used to run at Koddi.


    Planning in the Loop Era

    This isn’t anti-planning.

    Direction should be durable.

    But tactics should be fluid.

    You should know:

    • where you’re heading
    • what matters financially
    • what could kill the company

    But pretending you can accurately sequence the next twelve months of work in a fast-moving environment doesn’t make you disciplined.

    It makes you attached.

    Small and mighty teams don’t win because their roadmap was accurate.

    They win because they turn the highest leverage levers every single week.

    They know the constraint.
    They reset constantly.
    They do better. And bigger happens.

  • Restart Beats Refactoring in the Agent Era


    Yesterday our AI agents attended a small funeral.

    The one we buried was named Dumb Eric.

    He had been shipping code for days. Running development loops through the night. Opening pull requests while I slept.

    But another agent on our team had surpassed him.

    So we shut Eric down and rebuilt him from scratch.

    (The cover photo is Dumb Eric. An old MacBook Pro sitting slightly lopsided in a chair. That’s where he lives right now while we work out the kinks and figure out the long-term infrastructure.)


    Dumb Eric was the first agent I built to orchestrate our development workflow.

    A deterministic loop that could move through tickets, generate code, open PRs, and keep working without me sitting there babysitting it.

    At first it worked exactly the way I hoped.

    I could go to sleep, wake up, and find a stack of pull requests ready for review.

    The system was doing real work.

    Then another agent showed up.

    Steve, our CTO, built one named Charlie.

    And Charlie was better.

    Cleaner architecture.
    Better task handling.
    Fewer strange edge cases.

    So I did what most engineers instinctively do when their system starts falling behind.

    I tried to refactor.


    Refactoring feels productive.

    You keep the system you built.
    You improve pieces of it.
    You convince yourself you’re preserving momentum.

    But the deeper I went, the worse it got.

    Edge cases started piling up.
    Old decisions collided with new improvements.
    Half-finished fixes layered on top of older assumptions.

    Eventually I realized something uncomfortable.

    I wasn’t improving the system.
    I was protecting my past work.


    One of the principles I’ve been writing about with Loop is simple.

    Restart beats refactoring.

    For most of software history, restarting was the dangerous option.

    Rewrites were slow.
    Expensive.
    Risky.

    But the economics of building software have changed.

    When agents can generate large portions of a system quickly, the cost of rebuilding drops dramatically.

    So instead of continuing to patch Eric, I did something different.

    I had Charlie fork himself.

    Then generate a new Eric.


    Within an hour Eric was running again.

    Opening PRs.
    Moving through tickets.
    Doing exactly what he was supposed to do.

    And the lesson was obvious.

    Restarting was faster than refactoring.


    This isn’t just true for agents.

    We’re seeing the same pattern at a much larger scale.

    Right now we’re rebuilding the entire Psych Hub training platform.

    The previous system had accumulated years of constraints. Architecture that was hard to change. Features layered on over time. The usual gravity that slows teams down.

    Instead of carefully refactoring our way forward, we restarted.

    In roughly 4–6 weeks, we rebuilt the core platform and added more than a year’s worth of features customers had been asking for.

    At this point we’re mostly working through edge cases.

    But the bigger win is the foundation.

    The system is clean again.
    The architecture is easier to extend.
    And the team moves faster.

    Restarting gave us back our speed.


    For decades, software teams were taught to avoid rewriting systems at all costs.

    Protect the existing system.
    Refactor carefully.
    Preserve what you have.

    That advice made sense when rebuilding software took months or years.

    But the agent era changes the math.

    When building becomes dramatically faster, the cost of starting over collapses.

    And when the cost of rebuilding approaches zero, a different strategy starts to make sense.

    Sometimes the fastest way forward isn’t fixing the past.

    It’s letting it go and building something better.


    Dumb Eric died yesterday.

    Eric 2.0 shipped an hour later.

    And the lesson applies to far more than agents.

  • The Blurred Lines


    Lately I’ve noticed something happening on our team that would have felt strange a few years ago.

    Me, our CTO, and our Director of Engineering are all doing essentially the same thing.

    Not identical work. Different strengths. Different lenses. But overlapping execution in a very real way.

    Steve, our CTO, is deep in the weeds testing tools, pushing on new skills, and experimenting with fully autonomous agents. I’m focused more on product feel, interactions, and how things actually come together in the experience. Bryan is dialed in on security, consistency, and quality.

    But we’re all pushing code.

    And the truth is, I don’t think we could be shipping nearly as fast right now if we tried to keep the lines clean.

    This would have been weird before

    Historically, there were clear boundaries for a reason.

    Product defined the problem.
    Engineering built the solution.
    CTO focused on architecture, systems, and long-term direction.

    And honestly, there was a time when engineering not wanting product anywhere near the code made total sense. It protected quality. It protected consistency. It kept ownership clear.

    But AI-assisted development is changing the cost of contribution.

    Now a product leader can meaningfully contribute to real implementation. Engineers are shaping product decisions in real time, not just reacting to specs. The CTO is in the tools every day, testing what’s possible and pushing the edge of how we build.

    The distance between idea and execution is collapsing.

    And with that, the old boundaries start to get in the way.

    We’re faster because the lines are blurred

    The biggest realization for me has been this:

    We could not be moving at this pace if we tried to preserve traditional role separation.

    Agentic development thrives on momentum. It rewards people who can see a problem, take a swing at it, and move it forward without waiting for a handoff.

    If every idea has to travel through a clean chain of:
    Product → Spec → Engineering → Review → Ship

    You lose time. You lose context. You lose energy.

    But when the same group of people can:

    • spot the problem
    • sketch the solution
    • try something
    • refine it
    • and ship it

    You compress weeks into days.

    That’s what we’re seeing right now.

    It takes trust. And it’s uncomfortable.

    This only works because there’s a lot of trust.

    We’ve worked together across multiple companies for 15+ years. We know how each other thinks. We know each other’s strengths. We know when to push and when to step back.

    And if I’m being honest, it takes some uncomfortable acceptance. Especially on the engineering side.

    Letting a product person contribute to the codebase isn’t a small shift. It challenges old instincts. It can feel risky. It can feel messy.

    But the reality is, the best ideas have always come from when the lines blur a bit.

    At Wondr, one of our engineers, Chase, had ideas that directly improved mobile app adoption. Not theoretical improvements. Measurable impact. Those ideas didn’t come from a spec. They came from being close to the problem and thinking like a product person while building like an engineer.

    That kind of cross-pollination isn’t new.

    What’s new is that now we’re not just sharing goals.

    We’re sharing execution.

    This isn’t for everyone

    I don’t think this model works everywhere. At least not yet.

    It probably breaks down in:

    • Large organizations
    • Low-trust environments
    • Teams that haven’t worked together long
    • Places without strong systems and safeguards

    If you don’t have maturity, clear workflows, and mutual respect, blurred lines can quickly turn into chaos.

    But for small, experienced, high-trust teams, the upside is massive.

    You get:

    • More velocity
    • Faster learning loops
    • Better ideas
    • More ownership
    • Less waiting

    And if you have the right guardrails in place around security, quality, and consistency, you can move fast without breaking everything.

    The future feels different

    This is one of the bigger shifts I’m seeing up close.

    It’s not just that AI helps engineers code faster.

    It’s that the definition of who builds is starting to change.

    Product leaders can execute.
    Engineers can shape product in real time.
    CTOs are experimenting directly in the tools.

    The roles don’t disappear. The strengths still matter. The lenses are still different.

    But the lines between them are getting harder to see.

    And in the right environment, that’s a good thing.

    The best small teams aren’t just aligned on goals anymore.

    They’re aligned in execution.

  • Small Teams Don’t Die From Bad Ideas. They Die From Slow Ones.


    I’ve seen more teams struggle from moving too slowly than from making the wrong call.

    Not because they were lazy.
    Not because they didn’t care.
    Usually the opposite.

    They cared so much that everything had to be right before anything could ship.

    The problem is that early-stage companies don’t have the luxury of certainty. You have a runway. You have limited time. You need to learn fast enough to survive. If you take a year to carefully validate every decision, you may not have a company left by the time you’re confident.

    Bias toward action is not a personality trait. It’s a survival skill.


    The startup that never got the chance to learn

    I once started a company with a group of six cofounders. Half of us were product and tech. The other half were responsible for the program and content. The idea was a mobile app focused on stress and emotional health. A digital program with video content. Something that could genuinely help people.

    In the early days, it was great. We were forming the business, building mockups, getting the first version of the program off the ground. It felt like momentum.

    Then the pace started to shift.

    The content side became deeply focused on getting everything exactly right. Scripts were reviewed and rewritten over and over. Research was double checked. I remember one person who looked like they had been up for days, buried in papers, making sure every claim could be defended.

    It came from a good place. They cared about the quality. They wanted to stand behind it. They viewed this as releasing something deeply personal into the world.

    But it slowed us down in a way that started to matter.

    Getting the v1 content created took 4 to 5 months. When it was finally done, it felt like the finish line to them. I kept saying, this is the starting line. We’re about to learn all the things we need to change. We need to become a content factory while we’re a product factory.

    At the same time, we started debating smaller and smaller things. The shade of red in the design. The icon used on a screen. We would get aligned, then a week later the topic would come back up again. It was as if the product had to be perfect before it could exist.

    To be fair, the product side wasn’t perfect either. We took too long building the app. This was one of the first times I leaned heavily on AI to speed up development because we just needed to get something working. We could not spend months polishing something that hadn’t even met a real user yet.

    Eventually we got to the point where we shared the product with friends and family. And then we shut it down.

    Not because the idea was bad. Not because we ran out of money. We just couldn’t agree on how to operate.

    I wanted to move fast, learn from real users, take criticism, and iterate. The other side wanted to feel fully confident before releasing anything. They wanted it to be something they could stand behind completely.

    Both perspectives were understandable. But they were incompatible.

    It was either going to break before revenue or explode after it. We chose to stop early.

    Looking back, I still believe the idea had potential. And if we were building it today, the app could be delivered in record time, and the program produced at high quality just as fast. The tools are better now. The speed is there.

    But speed only helps if the culture supports learning through motion.


    What bias toward action actually means

    Bias toward action is not recklessness.

    It’s not betting the farm.
    It’s not skipping thinking.
    It’s not ignoring risk.

    It’s making decisions without 100 percent information.
    It’s placing small bets.
    It’s learning through motion.
    It’s accepting that small mistakes are part of the process.

    You can move fast and still be responsible. You can create offramps. You can mitigate risk. You can test in controlled ways.

    The key is to stop waiting for perfect clarity before you start.

    Clarity comes from doing.


    Time is the real constraint

    Early-stage teams don’t fail because every idea was wrong. They fail because they ran out of time before they learned enough.

    You have a runway.
    You have limited shots on goal.
    Your metrics are not where they need to be yet.
    Your next round is not guaranteed.

    If you take your time to feel confident in every decision, you may not have a company left to worry about.

    That sounds dramatic, but it’s the reality. Startups are a race against time, not a quest for perfection.

    And the pace of building and testing ideas is only accelerating. The cost to prototype, launch, and iterate has collapsed. The teams that learn the fastest will win. The teams that wait for certainty will fall behind.


    What slow cultures feel like

    You can feel a low-action culture almost immediately.

    Every decision needs approval.
    Small changes turn into meetings.
    Topics get revisited again and again.
    People start waiting instead of acting.

    I’ve seen environments where smart, capable people hesitate to change a single word on a web page because they’re worried about getting questioned later. Over time, that kills initiative. People stop thinking like owners. They start thinking like operators.

    Overthinking strips away autonomy. And without autonomy, you don’t get momentum.

    You get a lot of discussion.
    You get a lot of planning.
    You don’t get much movement.


    What high-action cultures look like

    I saw the opposite early in my career at Koddi. I was employee number one. We were bootstrapped. It was me, the president, and a small development team.

    We did everything. Product. QA. Sales. Customer support. Whatever needed to get done.

    We were relentlessly focused on moving fast with purpose. It wasn’t chaotic. It was disciplined. We removed distractions. We obsessed over making customers love us. We responded quickly. We solved problems quickly. We looked for the next opportunity before anyone asked.

    We grew fast. We stayed profitable. It felt like SEAL team training in execution.

    Later on, at Wondr, I saw how speed creates clarity.

    We saw data showing that mobile app users had significantly better weight-loss success and adherence, which tied directly to revenue.

    At a lot of companies, that insight would have been recorded, discussed, prioritized, and maybe addressed months later. We just acted.

    Because we had autonomy, we shifted focus immediately. We pushed hard on mobile adoption, and speed turned that insight into revenue in a short amount of time.

    Plans matter. Roadmaps matter. But smart people need the space to break the plan when reality changes.


    Practical ways to build a bias toward action

    This isn’t about slogans. It’s about behaviors.

    Default to trying.
    Run small experiments.
    Shorten decision loops.
    Kill zombie discussions that keep resurfacing.

    You don’t need to solve everything at once. You just need to learn faster than the problems are growing.


    You will never feel fully ready

    There will always be one more thing to refine. One more thing to validate. One more expert opinion to get.

    If you wait until everyone feels completely confident, you will wait forever.

    Bias toward action doesn’t mean you don’t care about quality. It means you care about learning. It means you accept that the first version will be imperfect. It means you trust that you can improve once reality starts pushing back.

    Speed creates clarity.
    Action reveals truth.
    Momentum builds belief.

    Small teams rarely die from bad ideas.
    They die from not moving fast enough to find the right one.

  • Why I’m Less Afraid of AI Breaking Things Than Slow Teams

    Why I’m Less Afraid of AI Breaking Things Than Slow Teams

    Last week, an AI agent deleted our staging database.

    Not corrupted it. Not partially broke it. Deleted it.

    To its credit, it immediately owned the mistake. No hedging. No confusion. Just a clear apology and a summary of what happened.

    The good news? We had staging fully restored in about 20 minutes.

    That’s the part that stuck with me – not the failure, the recovery.

    The real lesson wasn’t that AI can make big mistakes. We already know that. The lesson was how little damage it actually caused in a team designed to move quickly.

    And it reinforced something I’ve been thinking about for a while:

    I’m less afraid of AI breaking things than I am of slow teams.

    Breakage is inevitable. Slowness is fatal.


    This isn’t an argument for recklessness

    Let’s be clear. Speed without guardrails is chaos.

    If you’re going to let AI operate in your repo, you need containment: proper environment separation, scoped permissions, protections against destructive commands, version control discipline, backups you’ve actually tested, and visibility into what’s being executed.
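    The containment list above can be made concrete. Here is a minimal sketch, in TypeScript, of a command gate that checks an agent's proposed shell commands against a denylist and an environment allowlist before anything runs. All names here (`DESTRUCTIVE_PATTERNS`, `isCommandAllowed`) are illustrative, not from our actual setup:

```typescript
// Sketch of a command gate for an AI agent. Everything here is a
// hypothetical example of "scoped permissions" and "protections against
// destructive commands", not a real framework.

// Patterns we refuse to execute, no matter what the agent proposes.
const DESTRUCTIVE_PATTERNS: RegExp[] = [
  /\brm\s+-rf?\b/,                // recursive file deletion
  /\bdrop\s+(table|database)\b/i, // destructive SQL
  /\btruncate\s+table\b/i,        // table wipes
  /--force\b/,                    // forced git operations
];

// Environments the agent may touch. Production is deliberately absent.
const ALLOWED_ENVS = new Set(["local", "staging"]);

function isCommandAllowed(command: string, env: string): boolean {
  if (!ALLOWED_ENVS.has(env)) return false;
  return !DESTRUCTIVE_PATTERNS.some((p) => p.test(command));
}

// The staging incident would have been stopped at this layer.
console.log(isCommandAllowed("drop database app_staging", "staging")); // false
console.log(isCommandAllowed("npm test", "staging"));                  // true
```

    The point is not this particular denylist; it is that the check sits outside the agent, so a confused model cannot talk its way past it.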

    That staging incident didn’t hit production – and that wasn’t accidental. That’s guardrail design.

    The goal isn’t “let it break things.” The goal is to reduce blast radius when it does. Because something eventually will.


    Software has always broken

    Humans push bad code. Servers go down. Migrations go sideways. Someone drops a table.

    None of that is new.

    What’s different now is speed. AI can investigate, modify, and execute faster than any junior engineer – sometimes faster than a senior one. Yes, that includes making mistakes faster. But it also includes fixing them faster.

    If your team can detect, diagnose, and recover quickly, most mistakes become small events. Annoying, but survivable. Sometimes even valuable.

    If your team moves slowly, even small problems turn into drawn-out, expensive disasters.

    The risk isn’t just that something breaks. The risk is that you can’t respond when it does.


    The real safety net is recovery speed

    We like to think safety comes purely from prevention: more approvals, more documentation, more review layers, more caution.

    Process matters. Especially in healthcare. Especially when people are involved.

    But process without recovery capability creates fragility, not safety.

    The older I get, the more I believe the real safety net is recovery speed.

    Do you have backups? Can you restore quickly? Can your team jump in and solve the problem without a week of meetings? Can you move forward again the same day?
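    Those questions are testable before an incident forces the answer. A hedged sketch of an automated recovery drill: restore the latest backup into a scratch database and fail loudly if it takes longer than a target. `restoreLatestBackup` is a hypothetical stand-in for whatever your platform actually provides (pg_restore, a cloud snapshot API, and so on):

```typescript
// Sketch of a scheduled recovery drill. The restore function is a
// placeholder; the structure is the point: measure recovery, enforce a bar.

const TARGET_RECOVERY_MS = 20 * 60 * 1000; // a 20-minute recovery target

async function restoreLatestBackup(target: string): Promise<void> {
  // Placeholder for a real restore (pg_restore, snapshot API, etc.).
  void target;
}

async function recoveryDrill(): Promise<number> {
  const start = Date.now();
  await restoreLatestBackup("scratch_db");
  const elapsed = Date.now() - start;
  if (elapsed > TARGET_RECOVERY_MS) {
    throw new Error(`Recovery took ${elapsed} ms, over the target`);
  }
  return elapsed;
}

// Run the drill on a schedule, not just after something breaks.
recoveryDrill().then((ms) => console.log(`Restored in ${ms} ms`));
```

    A drill like this turns "we have backups" from a belief into a number you watch.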

    If the answer is yes, a lot of scary things become manageable.

    That staging incident could have been a nightmare if we didn’t have our fundamentals in place. Instead, it was a 20-minute disruption and a useful forcing function.

    That’s not luck. That’s posture.


    AI just exposes what was already true

    AI didn’t create this dynamic. It just made it more obvious.

    Fast teams have always had an advantage. Now the gap is widening.

    If you can prototype quickly, test quickly, recover quickly, and iterate quickly, you can afford to take more swings. You learn faster. You adapt faster. You improve faster.

    If you can’t, every mistake feels existential. So you slow down. You overanalyze. You try to control everything.

    And that’s where the real danger lives – not in breakage, but in paralysis.


    Slow teams don’t feel slow. They feel careful.

    This is the tricky part.

    No one thinks they’re moving slowly. It feels responsible. It feels thoughtful. It feels like protecting the company.

    I’ve been part of teams that spent weeks planning something that could have been tested in a day — long docs, endless alignment, multiple rounds of discussion, careful sequencing.

    By the time we shipped, the world had already moved on.

    In a startup, that kills you quietly.

    You don’t lose because of one catastrophic mistake. You lose because you learn too slowly.


    My fear has shifted

    A year ago, the idea of an AI autonomously making changes to a repo would have made me uneasy.

    Now? I’m still cautious. But I’m not nearly as afraid.

    Because I’ve seen what happens when you pair speed with guardrails: staging environments, backups, version control, clear ownership, tight feedback loops, and limited blast radius.

    When those things are in place, most mistakes are recoverable.

    What worries me more is the opposite environment – weeks to make a decision, months to ship something small, fear of touching anything, endless discussion with little movement.

    That’s the kind of system that slowly suffocates a company.


    The bar is changing

    We’re entering a phase where speed and quality aren’t the same tradeoff they used to be.

    AI can help investigate bugs in minutes. It can reason through codebases quickly. It can draft, test, and refine faster than we’re used to.

    But all of that only matters if the team is willing – and structurally able – to move.

    The teams that win won’t be the ones that avoid every mistake.

    They’ll be the ones that detect issues quickly, contain them, recover quickly, learn quickly, and keep going.


    I’d rather recover fast than move slow

    That staging incident didn’t make me want to lock everything down.

    It reinforced something I already believed.

    If you’re letting AI operate in your environment, you should absolutely be thoughtful. You should design guardrails. You should respect the risks.

    But you should also build a team and a system that can take a hit and keep moving.

    Breakage will happen – with humans, with AI, with both.

    The companies that survive won’t be the ones that prevent every failure.

    They’ll be the ones that design their systems so failures are contained – and recover faster than anyone else.

  • Prototyping > Vibe Coding

    Prototyping > Vibe Coding

    I’m not a fan of the term vibe coding. It sounds sloppy, unserious, and a little too close to “just prompt until something happens.” But the underlying idea – compressing the distance between an insight and something real you can interact with – is one of the most important shifts happening in product right now.

    For years, the path from idea to reality was slow and structured.

    Insight → wireframes → mockups → revisions → stakeholder input → engineering handoff → first working version → realization that we missed a lot.

    That cycle could take weeks. Sometimes months. And it often meant we were making major decisions based on static artifacts instead of something people could actually use.

    Now we can collapse most of that into hours.

    Tools like Lovable, Replit, and v0 let you spin up working prototypes in minutes. You can interact with them, tweak them, rethink flows, and explore directions before anyone commits to building production code. While these tools can generate full applications, I think their real sweet spot today is helping product teams and stakeholders spec projects faster and more clearly than ever before.

    Instead of describing the product, you can just show it.

    And more importantly, you can use it.

    From Spec to Something Real

    On our current platform project, we built a comprehensive prototype of the future product in about 20 hours, for roughly $200. A few years ago, reaching the same level of clarity would have taken months and $30–40k in external design and engineering support.

    But the part that really changed things wasn’t just the speed.

    We synced that prototype to GitHub. Then we had agents port the prototype pages into our actual codebase. At that point, the conversation shifted from:

    “Here’s a Figma file. Go build this.”

    to:

    “Here’s the UI. Your job now is to make this page work.”

    That’s a completely different starting point.

    The prototype becomes the spec.

    It already contains layout, interactions, component structure, and intent. Agents building against it already understand how things are supposed to behave. This is especially effective if your production stack shares the same front-end patterns. In our case, we chose to build with React, Tailwind, and shadcn because that is usually the default ecosystem these prototyping tools generate against. That alignment makes the handoff from prototype to production much smoother.

    The Tradeoff: Sameness

    Of course, there are tradeoffs.

    You can already see a certain sameness creeping into modern apps. Many AI-built interfaces use the same component libraries with slight visual variation. Dashboards start to look familiar. Patterns repeat.

    That’s the cost of speed.

    But it’s also the reason agents can move so fast. Standardized stacks mean less friction, less guesswork, and more momentum.

    And honestly, in early stages, speed matters more than visual originality. You’re trying to find something that works. Something useful, clear, and valuable. You can differentiate later.

    Why These Tools Matter Right Now

    Right now, tools like Lovable and Replit are incredible for getting an idea off the ground and iterating until something feels right. Not perfect. Not finished. Just right enough to build.

    One interesting thing I’ve noticed in my own workflow is how prompting style affects outcomes.

    Early on, I was extremely prescriptive. I’d use ChatGPT or Claude to generate long, detailed markdown instructions describing exactly what to build, then feed that into the prototyping tool. That works. Sometimes it’s necessary, especially when you have a clear vision.

    But I’ve also found value in being intentionally vague.

    Prompts like “make the dashboard useful” or “this page feels empty” can lead to unexpected ideas. Layouts, features, or small touches we hadn’t thought of. Occasionally, those end up being the best parts.

    There’s a balance there. Direction matters. But leaving room for interpretation can surface new thinking.

    The Real Shift

    To me, that’s what modern prototyping really is.

    It’s not about replacing design.
    It’s not about skipping engineering.
    And it’s definitely not about “vibes.”

    It’s about moving the moment of truth earlier.

    Instead of debating ideas, you interact with them.
    Instead of writing long specs, you explore working versions.
    Instead of waiting weeks to learn what you missed, you find out the same day.

    And once you have something real, agents can take it the rest of the way.

  • Why Agile Breaks in the Agent Era

    Why Agile Breaks in the Agent Era

    Agile was one of the most important shifts in how software teams work. It replaced rigid planning with iteration, feedback, and collaboration. It helped teams ship faster, learn sooner, and waste less effort.

    But Agile was designed around a core assumption:

    Humans do most of the execution.

    That assumption is now false.


    The assumption Agile is built on

    Agile practices – sprints, story points, backlog grooming, standups – all exist to solve a specific problem:

    How do we coordinate groups of humans doing complex, expensive work?

    When humans write code:

    • execution is slow
    • changes are costly
    • mistakes compound
    • coordination is necessary

    Agile optimizes around that reality.
    Planning reduces waste.
    Small increments reduce risk.
    Commitments help teams align.

    All of that makes sense—if humans are the bottleneck.


    What changed

    AI agents can now:

    • implement complex features
    • refactor codebases
    • write and fix tests
    • iterate rapidly with minimal cost

    Execution is no longer scarce.

    The constraint has moved.

    The hardest part of building software today is not typing code – it’s:

    • deciding what to build
    • deciding what quality looks like
    • deciding when something is “good enough”
    • deciding what to do next based on what you see

    In other words: judgment.

    Agile does not optimize for judgment. It optimizes for coordination.


    When execution becomes cheap, planning becomes noise

    In an agent-driven world:

    • estimating work is guesswork
    • sprint commitments become artificial constraints
    • backlogs grow faster than they’re resolved
    • teams plan more than they learn

    You can feel this tension already.

    Teams say they “do Agile,” but:

    • they skip estimates
    • they ship outside sprint boundaries
    • they prototype first and plan later
    • they use AI tools to bypass the process

    Agile hasn’t failed. Its assumptions have expired.


    The new bottleneck is review, not execution

    When agents can produce working software quickly, risk shifts downstream.

    The question is no longer:

    “Can we build this?”

    It’s:

    “Is this the right thing, at the right quality, at the right time?”

    That question can’t be answered in planning meetings.

    It can only be answered by:

    • seeing working software
    • interacting with it
    • judging it in context
    • learning from real signals

    This is why review beats planning in the agent era.


    Why sprints stop making sense

    Sprints exist to batch work into predictable intervals.

    But when:

    • execution is fast
    • changes are cheap
    • learning happens continuously

    Fixed cadences become friction.

    Work doesn’t finish because the sprint ends.
    It finishes when a decision is made.

    In practice, teams already know this—they ship when things are ready and pretend the sprint mattered.


    Agile optimizes throughput. The agent era demands judgment.

    Agile measures:

    • velocity
    • story completion
    • predictability

    But those metrics assume:

    • humans are the workers
    • output is the goal
    • efficiency is the constraint

    In agent-first teams:

    • output is abundant
    • efficiency is cheap
    • bad decisions are the real cost

    Optimizing throughput without improving judgment just means shipping the wrong thing faster.


    This isn’t about abandoning Agile values

    Agile’s values like collaboration, adaptability, and customer focus still matter.

    What breaks is the mechanism.

    The tools, rituals, and language of Agile reflect a world where:

    • execution is expensive
    • planning is protection
    • small increments are mandatory

    That world is disappearing – and it’s happening fast.


    What replaces it

    In the agent era, teams need a system that:

    • treats execution as cheap
    • treats judgment as scarce
    • prioritizes review over prediction
    • measures learning over commitment
    • adapts rigor based on risk

    This is why new models are emerging – whether teams name them or not.

    Some are already working this way.
    Most just don’t have language for it yet.


    The transition is already happening

    You can see it in:

    • prototype-first product work
    • AI-driven implementation
    • features built in single passes
    • planning rituals quietly abandoned
    • review and iteration happening in real time

    The gap is not practice.
    It’s the operating model.


    A final thought

    Agile replaced Waterfall because the world changed.

    The agent era is another such moment.

    When execution is cheap,
    planning loses power,
    and judgment becomes everything.

    The teams that recognize this early will build faster, better, and with fewer compromises.

    The rest will keep coordinating work that no longer needs coordination.