Code Is Now Cheap. Waste Still Isn’t.

You can compress everything a product manager does (at least in lean companies) down to two statements, without too much loss:

1. Ensure that the company is building the right thing

2. Ensure that it builds only the smallest piece needed to get a clear signal on how much more of it is worth building

During my time working on Seller Growth Tools at Skroutz and in previous roles, the delivery cadence we followed looked something like this:

Phase I. We’d define the minimum viable changes on the Buyer’s side. Often we’d restrict these to the SKU page, as touching earlier stages of the user journey also required work on Elasticsearch, prohibitively increasing development cost. We’d also recruit Sellers for the pilot. Their willingness to participate was a core input in our decision—and a good proxy for adoption potential. If recruiting proved challenging, or if we couldn’t make our pilot shops succeed even at such a small, controlled scale, we scrapped the idea and went back to the drawing board.

Phase II. If the pilot proved successful, we’d make the feature generally available. We’d start with a basic tool that might lack functionality many would consider table stakes—like editing (i.e. you could create or pause deals but not edit existing ones) or scheduling.

Phase III. The next few increments would complete the feature. These usually looked like mass actions (e.g. mass creating deals through CSV imports), elements that enhanced discoverability earlier in the user journey, or advanced functionality (e.g. allowing combinations of different products in quantity discounts). The incremental delivery ensured that we were always working on the most valuable issue, minimizing waste.

Discussions of modifying this tried-and-true approach only became relevant after LLMs reached a critical degree of autonomy and code quality last December.

Now that code approaches zero marginal cost, why stagger delivery into pieces? Releasing the entire feature should, in theory, give you a much stronger signal on its market fit.

For me the question is not theoretical. A new portfolio company is currently working on an AI assistant, with the initial implementation including:

1. Core chat functionality that will theoretically cover every question type from the outset

2. Tool calling that will allow the agent to perform actions for the user

3. A full billing system that allows users to self-serve and purchase credits

4. A model selector

If the speccing work has broadly always been frontloaded, and generating the extra code costs next to nothing, why not go all the way in v1?

Code cost was only one component of the potential waste—and arguably not even the dominant one. Signal ambiguity and attention allocation matter at least as much. More importantly, staggered delivery was never just a means to control production cost. It was THE way to learn and shape the product.

Identifying and prioritizing the riskiest assumptions minimizes confounding variables. If we’d released the Skroutz discounting tools without a pilot, made the tool generally available, and met with underwhelming adoption, we’d have had to wade through many competing hypotheses on the point of failure. Were merchants unable or unwilling to further discount their products? Did they try any funny business, like increasing the nominal price and offering a coupon to trick consumers? Did consumers understand that the coupon applied to a specific product and was immediately redeemable?

Similarly, if we release the AI assistant fully formed and adoption disappoints, it will be much harder to know whether the core chat falls flat because it doesn’t solve a real problem, because we’re prioritizing the wrong channels, or simply because our largely non-technical customers can’t navigate the pricing model. It will be harder to determine which pieces of our solution are solid—and we should thus keep—and which we should reconsider.

None of this means the delivery cadence should stay the same. Our increments can and should get chunkier. Skipping the edit capability in a v1 tool is no longer on the menu. But anything past our key assumption’s horizon is potential waste, no matter how fast we can produce it.

This is also no excuse to accept the product manager’s ability to digest and analyze feature performance as the system’s new bottleneck. Instead of delivering end-to-end features sequentially (which, I’d argue, are still better released over the span of a few days than all at once), we can parallelize our work—placing the same small number of chips on more bets.

After all, the end state of a successful product org (and the one I’m steering Starttech’s founders toward) is one where the product manager keeps the team inundated with context, allowing engineers to handle implementation end to end—including the crucial task of breaking features down into atomic, valuable, and sound increments.

Rafael Papagiannis