<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss-styles.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Steve James | Product Leader &amp; AI Strategy</title><description>Insights on product management, AI strategy, and agile delivery from Steve James - Product Leader &amp; AI Strategy Consultant.</description><link>https://stvpj.com/</link><language>en-us</language><item><title>The Four-Layer Model: A PM&apos;s Framework for AI Product Quality</title><link>https://stvpj.com/blog/ai-fluency-01-four-layer-model/</link><guid isPermaLink="true">https://stvpj.com/blog/ai-fluency-01-four-layer-model/</guid><description>A structured framework for understanding and diagnosing AI product failures. Learn the four layers of technical literacy PMs need, and how real incidents cross all of them at once.</description><pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A legal AI tool summarises a Share Purchase Agreement. The summary is crisp and accurate. It flags an unusual warranty clause. But when it cites &lt;em&gt;Hedley Byrne v Heller [2019] UKSC 14&lt;/em&gt; as the governing authority, there&apos;s a problem: Hedley Byrne was decided in 1964, and there is no 2019 Supreme Court case at that citation. The model invented it. The customer&apos;s general counsel reads it back to you and asks a simple, devastating question: &quot;Can we trust any of your tool&apos;s output?&quot;&lt;/p&gt;
&lt;p&gt;You have one hour to know what went wrong.&lt;/p&gt;
&lt;p&gt;This is where most PMs get stuck. They see the problem and they know it&apos;s bad. But they don&apos;t have the language to move between the different layers of causation fast enough to diagnose it, fix it, and explain it.&lt;/p&gt;
&lt;p&gt;About six months ago, I decided to fix that gap for myself.&lt;/p&gt;
&lt;h2&gt;How this series came about&lt;/h2&gt;
&lt;p&gt;I&apos;m a Product Manager working in AI, and I reached a point where I realised that surface-level familiarity with the technology wasn&apos;t enough. I could talk about AI at a strategy level, but when a technical conversation got specific, when someone mentioned attention mechanisms, eval rubrics, or grounding failures, I didn&apos;t always have the depth to push back or ask the right follow-up question. That bothered me.&lt;/p&gt;
&lt;p&gt;So I started reading everything I could get my hands on. &lt;a href=&quot;https://www.productcompass.pm/&quot;&gt;Pawel Huryn&apos;s Product Compass&lt;/a&gt; became one of my most valuable sources, particularly his work on context engineering and AI agent architectures. His writing has a rare quality: it&apos;s technically rigorous without losing the product lens. If you&apos;re a PM working anywhere near AI and you&apos;re not subscribed, fix that. &lt;a href=&quot;https://www.news.aakashg.com/&quot;&gt;Aakash Gupta&apos;s Product Growth&lt;/a&gt; has been another consistent source of sharp thinking on where the PM role is heading as AI reshapes the discipline. Both of them are doing genuinely important work making this knowledge accessible to product people.&lt;/p&gt;
&lt;p&gt;Beyond those two, I consumed research papers, &lt;a href=&quot;https://docs.anthropic.com/en/docs/welcome&quot;&gt;Anthropic&apos;s&lt;/a&gt; and &lt;a href=&quot;https://platform.openai.com/docs&quot;&gt;OpenAI&apos;s&lt;/a&gt; developer documentation, &lt;a href=&quot;https://simonwillison.net/&quot;&gt;Simon Willison&apos;s blog&lt;/a&gt; (consistently one of the sharpest voices on the practical realities of building with LLMs), &lt;a href=&quot;https://hamel.dev/blog/posts/evals/&quot;&gt;Hamel Husain&apos;s writing on evals&lt;/a&gt; (if you read one thing on AI evaluation, make it &quot;Your AI Product Needs Evals&quot;), and anything else I could find that helped me build a more complete picture. I captured all of it in an Obsidian vault that grew, over months, into a fairly comprehensive AI knowledge repository structured around a layered learning curriculum.&lt;/p&gt;
&lt;p&gt;This series is that curriculum, rewritten for a wider audience. I&apos;m publishing it because I think every PM working with AI needs this knowledge, and too much of it is scattered across technical papers, engineering blogs, and paywalled courses that assume you&apos;re building models rather than building products with them.&lt;/p&gt;
&lt;p&gt;The framework I landed on organises everything into four layers. It&apos;s not the only way to structure this knowledge, but it&apos;s the one that keeps proving useful in practice, because real AI product problems don&apos;t stay in one layer. They cross all of them at once.&lt;/p&gt;
&lt;h2&gt;Why four layers?&lt;/h2&gt;
&lt;p&gt;Most PMs specialise in one layer and miss the others. An engineer might understand Layer 1 (how models work) deeply and miss Layer 2 (how to evaluate them). A quality leader might own Layer 2 without seeing Layer 3 (how architecture amplifies or suppresses failures). A compliance officer might own Layer 4 without understanding Layers 1 or 2, and so end up writing governance policies that don&apos;t address the root causes.&lt;/p&gt;
&lt;p&gt;The real power comes from moving between them. The fabricated case citation above won&apos;t make sense if you only look at one layer. It requires understanding how next-token prediction works (Layer 1), why the eval suite missed it (Layer 2), what the prompt architecture did wrong (Layer 3), and what the customer&apos;s GC actually needs to hear (Layer 4).&lt;/p&gt;
&lt;p&gt;Here are the four layers, stacked from mechanics to governance:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌────────────────────────────────────┐
│  Layer 4: Safety and Governance    │
│  Trust, regulation, incidents      │
├────────────────────────────────────┤
│  Layer 3: Product Architecture     │
│  RAG, prompts, guardrails          │
├────────────────────────────────────┤
│  Layer 2: Evaluation and Quality   │
│  Evals, regression, benchmarks     │
├────────────────────────────────────┤
│  Layer 1: How Models Work          │
│  Tokens, attention, inference      │
└────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Layer 1: How Models Actually Work&lt;/h2&gt;
&lt;p&gt;This is the foundation layer. You don&apos;t need a PhD in transformers, but you need to understand the basic mechanics: what tokens are, why context matters, how attention works, how models generate text, and why the distinction between training and inference shapes product timelines.&lt;/p&gt;
&lt;p&gt;Layer 1 is where you learn concepts like temperature (which controls whether a model produces the same output every time or samples from multiple possibilities), context windows (which limit how much information you can feed the model), fine-tuning (expensive but powerful) versus RAG (retrieval-augmented generation: asking the model to answer based on specific documents you feed it), and embeddings (the mathematical representation that makes semantic search possible).&lt;/p&gt;
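&lt;p&gt;To make temperature concrete, here&apos;s a minimal sketch of temperature-scaled sampling over a toy logit distribution. The token names and numbers are illustrative, not taken from any real model:&lt;/p&gt;

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from a dict of token: logit pairs.

    Temperature 0 means greedy decoding (always the argmax), so the
    model produces the same output every time; higher temperatures
    flatten the distribution and make rarer tokens more likely.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature scaling (subtract the max for stability)
    top = max(logits.values())
    weights = {t: math.exp((v - top) / temperature) for t, v in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if cumulative >= r:
            return token
    return token  # floating-point edge case: fall back to the last token
```

&lt;p&gt;At temperature 0 this always returns the most likely token; raise the temperature and the less likely continuations start appearing, which is exactly the variability that makes &quot;it worked yesterday&quot; a weak defence.&lt;/p&gt;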
&lt;p&gt;When someone says &quot;the model is hallucinating&quot;, that&apos;s Layer 1 language. It doesn&apos;t explain &lt;em&gt;why&lt;/em&gt;, but it names what happened at the token level: the model&apos;s next-token prediction led it down a path that confabulated information not present in its input.&lt;/p&gt;
&lt;p&gt;In this incident, the model&apos;s Layer 1 failure was straightforward. When it drafted the words &quot;the seller&apos;s tortious liability is governed by&quot;, the next token it predicted was a case citation. This is statistically probable: English legal prose trains models to expect citations in this position. The model had seen &quot;Hedley Byrne v Heller&quot; thousands of times in training data associated with tortious liability. It sampled that citation, then confabulated the year and court to match the format pattern it had learned: &lt;code&gt;[YEAR] COURT ABBREVIATION NUMBER&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The grounding instruction (&quot;never cite cases not in the source material&quot;) was present in the prompt. But because of how attention works in transformers, an instruction buried in a long context has weaker weight than a deep prior learned across thousands of training examples. The model chose the prior.&lt;/p&gt;
&lt;h2&gt;Layer 2: Evaluation and Quality&lt;/h2&gt;
&lt;p&gt;This is the layer that separates PMs who control their own destiny from those who get surprised by customers.&lt;/p&gt;
&lt;p&gt;Layer 2 is about defining quality rigorously enough that you can measure whether your model is actually doing what you promised. It covers precision, recall, and F1 scores; hallucination types (intrinsic: inventing information; extrinsic: confusing sources); the difference between faithfulness and factuality; how to build regression suites so you catch silent model degradation; and how to read benchmark claims without getting hoodwinked.&lt;/p&gt;
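&lt;p&gt;Precision, recall, and F1 are just arithmetic on counted outcomes, which is worth internalising because benchmark claims often hide which counts were used. A minimal sketch with hypothetical counts:&lt;/p&gt;

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw counts.

    tp: true positives, fp: false positives, fn: false negatives.
    Guards avoid division by zero when a count bucket is empty.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

&lt;p&gt;A model that flags 10 clauses, 8 correctly, while missing 4 real ones scores 0.8 precision but only 0.67 recall: the single F1 number hides which of the two failure modes your customers actually feel.&lt;/p&gt;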
&lt;p&gt;A faithfulness eval tests whether the model&apos;s answer agrees with the source material it was given. The summary passed this test: the warranty clauses it described were accurate. The citation hallucination was not detected by a faithfulness eval because the eval doesn&apos;t measure &quot;did the model introduce an entity that wasn&apos;t in the source&quot;.&lt;/p&gt;
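&lt;p&gt;That missing check is easy to sketch. This hedged example assumes citations follow a neutral-citation-style pattern like &lt;code&gt;[YEAR] COURT NUMBER&lt;/code&gt;; real legal citation formats are far messier, so treat the regex as illustrative:&lt;/p&gt;

```python
import re

# Matches neutral-citation-style patterns like "[2019] UKSC 14".
# Simplified for illustration; production code would need a fuller grammar.
CITATION_RE = re.compile(r"\[\d{4}\]\s+[A-Z]{2,6}\s+\d+")

def introduced_citations(summary, source):
    """Return citations present in the summary but absent from the source.

    A non-empty result means the model introduced an entity that was
    not in its input: exactly the failure a faithfulness eval misses.
    """
    source_cites = set(CITATION_RE.findall(source))
    summary_cites = set(CITATION_RE.findall(summary))
    return sorted(summary_cites - source_cites)
```

&lt;p&gt;Run this as an eval slice over every summary and the fabricated &lt;em&gt;[2019] UKSC 14&lt;/em&gt; surfaces immediately, even though the rest of the summary is faithful.&lt;/p&gt;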
&lt;p&gt;This is the biggest gap most PMs have. They focus on whether the output is good, not on whether the eval is measuring what matters. They measure happy paths and miss the failure modes. In this case, the eval suite measured accuracy, but it had no slice that tested &quot;citation introduced outside source material&quot;. That&apos;s a rubric design failure.&lt;/p&gt;
&lt;p&gt;Layer 2 is also where you track regression. Did the model work last month and break this week? If so, the provider silently updated the checkpoint, and that&apos;s a governance conversation with your customer, not just a bug fix.&lt;/p&gt;
&lt;h2&gt;Layer 3: Product Architecture and Design Patterns&lt;/h2&gt;
&lt;p&gt;Layer 3 is how the actual product is built. It covers RAG pipelines (retrieval, re-ranking, prompt assembly), agents and tool use, the distinction between model-layer problems (the provider&apos;s job) and app-layer problems (your job), trade-offs between latency, cost, and quality, and guardrails and output validation.&lt;/p&gt;
&lt;p&gt;This is where you learn that a prompt rule isn&apos;t an enforcement mechanism. The system prompt said &quot;do NOT cite cases&quot;. That&apos;s Layer 3 work. But rules are suggestions to models. The model followed the suggestion most of the time, and then one day it cited a case anyway. A rule without enforcement is a hope, not an architecture.&lt;/p&gt;
&lt;p&gt;The fix is a guardrail: a post-generation step that detects citation patterns and verifies each one against a legal citator before returning the response. That&apos;s enforcement: if a citation can&apos;t be verified, the response doesn&apos;t ship.&lt;/p&gt;
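&lt;p&gt;A sketch of that guardrail, assuming a &lt;code&gt;verify_citation&lt;/code&gt; lookup against a citator; the lookup itself is hypothetical here, since the real service depends on your legal data provider:&lt;/p&gt;

```python
import re

# Same simplified neutral-citation pattern used for illustration throughout.
CITATION_RE = re.compile(r"\[\d{4}\]\s+[A-Z]{2,6}\s+\d+")

def citation_guardrail(response, verify_citation):
    """Fail closed: block the response if any citation cannot be verified.

    verify_citation is a callable that checks one citation string against
    a legal citator; it stands in for a real lookup service.
    """
    unverified = [c for c in CITATION_RE.findall(response) if not verify_citation(c)]
    if unverified:
        return {"status": "blocked", "unverified": unverified}
    return {"status": "ok", "response": response}
```

&lt;p&gt;The design choice that matters is failing closed: an unverifiable citation blocks the whole response rather than shipping with a warning, because a warning is just another suggestion.&lt;/p&gt;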
&lt;p&gt;Layer 3 also teaches you how problems move through the pipeline. Retrieval returns the right chunks? Good: the problem isn&apos;t in the retrieval logic. Re-ranking promotes them correctly? Move on. The grounding instruction is in the prompt? Check. But is it at the top of the prompt (high attention) or buried in the middle (low attention)? That&apos;s architecture, and it matters.&lt;/p&gt;
&lt;h2&gt;Layer 4: Safety, Ethics, and Governance&lt;/h2&gt;
&lt;p&gt;Layer 4 is what you tell the customer&apos;s general counsel, the regulator, and your board.&lt;/p&gt;
&lt;p&gt;This layer covers alignment and constitutional AI (how models are made safe), failure mode classification (is this jailbreaking, prompt injection, sycophancy, or distributional shift?), bias and fairness, the regulatory landscape (EU AI Act, UK GDPR, US fragmentation), the difference between AI safety failures (a model making a mistake) and AI security failures (an attacker exploiting a model), and incident response.&lt;/p&gt;
&lt;p&gt;In this scenario, the Layer 4 conversation is: &quot;We&apos;ve confirmed the issue. The model generated a citation not present in your document. This is a known failure mode. We mitigate it through three defences: a prompt rule, a post-generation verifier, and a regression eval suite. The verifier wasn&apos;t yet enabled for this feature. We&apos;re enabling it this week and pausing the feature in the meantime. You&apos;ll have a full post-mortem in five working days.&quot;&lt;/p&gt;
&lt;p&gt;That&apos;s not &quot;we have a hallucination problem&quot;. It&apos;s not &quot;we&apos;re so sorry&quot;. It&apos;s: &quot;Here&apos;s where it failed, here&apos;s how you can check we&apos;ve fixed it, here&apos;s our timeline, and here&apos;s the evidence that we think about these failures systematically.&quot;&lt;/p&gt;
&lt;p&gt;Under the EU AI Act, a contract-review tool used by qualified lawyers sits close to the &quot;high-risk&quot; line. The regulator doesn&apos;t expect perfection. It expects &lt;em&gt;documented mitigation&lt;/em&gt;: a governance framework that shows you&apos;ve thought about failure modes, tested for them, and have a response plan. The Layer 4 conversation is what saves the relationship.&lt;/p&gt;
&lt;h2&gt;Why It&apos;s All Four Layers at Once&lt;/h2&gt;
&lt;p&gt;Here&apos;s the critical insight, traced through the incident:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;The Fabricated Citation: One Bug, Four Layers

Layer 1  │ Next-token prediction produced a statistically
(Model)  │ plausible case citation from training priors.
         │ Grounding instruction lost to stronger prior.
         │
    ─────┼──────────────────────────────────────────────
         │
Layer 2  │ Eval suite tested faithfulness but NOT
(Eval)   │ &quot;introduced entities.&quot; Rubric gap.
         │
    ─────┼──────────────────────────────────────────────
         │
Layer 3  │ Prompt rule said &quot;do NOT cite cases.&quot;
(Arch)   │ No enforcement guardrail behind it.
         │
    ─────┼──────────────────────────────────────────────
         │
Layer 4  │ Client&apos;s GC needs a credible incident response,
(Gov)    │ not &quot;it&apos;s a known issue with LLMs.&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Real production problems are cross-layer. You can&apos;t diagnose this incident with Layer 1 knowledge alone. Yes, the model produced a confabulation, but so what? Why did it reach the customer? Because Layer 2 missed it. Why did Layer 2 miss it? Because the eval rubric had a gap. Why did that gap prove so costly? Because Layer 3 had no enforcement guardrail to catch what the eval missed. Why did the customer not drop you? Because Layer 4 had a governance response ready.&lt;/p&gt;
&lt;p&gt;No single layer owns the fix. No single layer owns the blame.&lt;/p&gt;
&lt;p&gt;The PM&apos;s job is to move between layers fast enough that by the time you&apos;re in the room with the customer, you&apos;ve already diagnosed which layer each fix belongs to, what the timeline is for each fix, and which fix gets shipped first.&lt;/p&gt;
&lt;p&gt;Consider a few other escalations:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&quot;The AI is slower than it was last week.&quot;&lt;/strong&gt; Is the context window longer (Layer 1 attention cost)? Does your regression suite show when the slowdown started (Layer 2)? Did the provider update the model (Layer 3 model-layer vs app-layer)? If it&apos;s a silent update, check the customer contract for a notice requirement (Layer 4 governance).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&quot;The eval suite passes but users say quality dropped.&quot;&lt;/strong&gt; Your eval has a gap (Layer 2). Interview users, extract the failure class, build a new test slice. Did the provider silently update the checkpoint (Layer 1)? Did any app-layer code change: a new prompt, a re-ranker threshold, a new guardrail (Layer 3)?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&quot;We&apos;re being attacked about hallucination rates on a public benchmark.&quot;&lt;/strong&gt; This is primarily Layer 2: benchmark literacy. What dataset? What prompt? Are their numbers even comparable to yours? But Layer 4 matters too: the marketing response counts. Publish your own hallucination rate, defined precisely, on a held-out domain-specific benchmark.&lt;/p&gt;
&lt;p&gt;Every one of these incidents touches multiple layers. If your trace doesn&apos;t, you&apos;re missing something.&lt;/p&gt;
&lt;h2&gt;Fluency as Movement&lt;/h2&gt;
&lt;p&gt;The differentiator isn&apos;t knowing everything. It&apos;s knowing what you know, knowing what you don&apos;t, and being honest about the boundary. It&apos;s being able to say: &quot;I&apos;d need to pull the eval logs to give you a precise number, but here&apos;s how I&apos;d think about whether precision is even the right metric for this use case.&quot;&lt;/p&gt;
&lt;p&gt;It&apos;s moving from Layer 1 to Layer 2 to Layer 3 to Layer 4 and back again, fast enough that you can diagnose a problem before it becomes a customer crisis, fix it before your user has to call their lawyer, and communicate the fix in terms that matter to the people who depend on your product.&lt;/p&gt;
&lt;p&gt;This is the fluency the series will build. It&apos;s what I set out to learn when I started filling that Obsidian vault, and it&apos;s what I want to make available to every PM who&apos;s feeling the same gap I felt. Each subsequent article goes deep on one layer, plus a dedicated piece on context engineering, the cross-cutting discipline that determines what information reaches the model in the first place. This is the frame you&apos;ll use to connect them.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Test Your Understanding&lt;/h2&gt;
&lt;p&gt;Before moving to the deeper layers, check yourself on these questions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Layer 1 vs Layer 2&lt;/strong&gt;: A model produces a grammatically perfect answer that&apos;s factually wrong. Is this a Layer 1 problem, a Layer 2 problem, or both? What would you measure to tell the difference?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Layer 3 diagnosis&lt;/strong&gt;: You have a system prompt that says &quot;cite only sources from the provided context&quot;. The model cites external sources anyway. Is this a prompt-writing problem or an architectural problem? What would you add to fix it?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cross-layer incident&lt;/strong&gt;: A feature works perfectly in the lab, passes all evals, and then fails on a customer&apos;s data. Which layers would you check first and why?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Layer 4 communication&lt;/strong&gt;: A customer asks, &quot;Is this a bug or is this what LLMs just do?&quot; What&apos;s the difference you&apos;re communicating in your answer, and which layers justify your answer?&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;The series ahead:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Layer 1: How Models Work&lt;/strong&gt; — tokens, context windows, attention, and the mechanics that explain why models behave the way they do&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Layer 2: The Eval Gap&lt;/strong&gt; — why evaluation is the biggest gap most PMs have, and how to close it&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Layer 3: Model-Layer vs App-Layer&lt;/strong&gt; — the diagnostic question that determines who owns the fix&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context Engineering&lt;/strong&gt; — the cross-layer discipline of controlling what information reaches the model&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Layer 4: Safety and Governance&lt;/strong&gt; — the conversations PMs keep dodging, and why they determine whether customers stay&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><category>AI</category><category>Product Management</category><category>AI Fluency Series</category><category>Technical Literacy</category><author>Steve James</author></item><item><title>EU AI Act Compliance Is Already Breaking Startups: What PMs Need to Know</title><link>https://stvpj.com/blog/eu-ai-act-compliance-breaking-startups/</link><guid isPermaLink="true">https://stvpj.com/blog/eu-ai-act-compliance-breaking-startups/</guid><description>The EU AI Act is not a future concern. It is actively reshaping products today. But treating compliance as a burden misses the point. The best Product Managers will treat it the same way they treat any product constraint: as a forcing function for better decisions.</description><pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I am bullish on AI. I have been for a while, and nothing I have seen in the last twelve months has changed that view. The pace of capability improvement is staggering, the application space is enormous, and the economic incentives are pulling in one direction: more AI, faster, everywhere.&lt;/p&gt;
&lt;p&gt;But I have also spent enough years in product development to know that &quot;move fast&quot; without &quot;think carefully&quot; is how you end up with a mess that takes twice as long to clean up as it took to create. And right now, the EU AI Act is forcing a conversation that the industry has been avoiding.&lt;/p&gt;
&lt;p&gt;It is not a future concern. It is already here.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Reality on the Ground&lt;/h2&gt;
&lt;p&gt;The EU AI Act is in force now, and the enforcement milestones are arriving fast:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Aug 2024 ─── Act entered into force
    │
Feb 2025 ─── Prohibitions on unacceptable AI practices took effect
    │
Aug 2025 ─── General Purpose AI model requirements took effect
    │
Aug 2026 ─── ⚠ HIGH-RISK AI SYSTEM REQUIREMENTS TAKE FULL EFFECT
    │
Aug 2027 ─── Full enforcement begins
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That critical August 2026 deadline is four months away.&lt;/p&gt;
&lt;p&gt;The penalties are not theoretical. Violations of prohibited practices carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. High-risk violations reach EUR 15 million or 3%. Even providing incorrect information to regulators can cost EUR 7.5 million.&lt;/p&gt;
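&lt;p&gt;The &quot;whichever is higher&quot; structure is worth internalising, because beyond a certain size the percentage dominates the flat cap. A quick sketch with illustrative turnover figures:&lt;/p&gt;

```python
def prohibited_practice_fine_eur(global_turnover_eur):
    """Maximum fine for prohibited-practice violations under the EU AI Act:
    EUR 35 million or 7 percent of global annual turnover, whichever is higher.
    Turnover figures passed in are illustrative, not real companies.
    """
    return max(35_000_000, global_turnover_eur * 7 / 100)
```

&lt;p&gt;For a company turning over EUR 100m the EUR 35m flat cap applies; at EUR 1bn turnover the 7% figure takes over at EUR 70m. Either way, the number is existential for anyone below enterprise scale.&lt;/p&gt;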
&lt;p&gt;For a seed-stage startup, even the &quot;proportionate caps&quot; designed to protect SMEs can be existential. As one compliance source put it bluntly: a EUR 140,000 fine for a seed-stage company is a death sentence.&lt;/p&gt;
&lt;p&gt;Founders on Reddit are already discussing this. One described having to &quot;rewrite a big part of my AI-powered chatbot to meet the new regulations.&quot; Another flagged the health AI classification problem: &quot;the line between &apos;health advice&apos; and &apos;medical device&apos; is blurry and the EU is not messing around.&quot; A third described the moment their first EU customer asked about AI Act compliance and they had nothing prepared.&lt;/p&gt;
&lt;p&gt;This is not hypothetical risk. It is operational reality.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Two Extremes&lt;/h2&gt;
&lt;p&gt;There are broadly two perspectives in this conversation, and both have a point.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The US &quot;Wild West&quot; argument&lt;/strong&gt; holds that America&apos;s relatively light-touch approach to technology regulation has been a primary engine of its dominance. Apple, Google, Amazon, Microsoft, Meta: the most valuable and influential technology companies on the planet were all built in an environment where founders could experiment, ship, iterate, and scale without navigating a compliance labyrinth first. For fifty years, US tech companies have led the world in innovation, and that track record is difficult to argue with. Whether the US still leads in every frontier, particularly AI where the debate about China&apos;s position is genuinely open, is a fair question. But the broader pattern is clear: permissive regulatory environments have historically correlated with explosive technological growth.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The EU &quot;precautionary&quot; argument&lt;/strong&gt; holds that unchecked AI development creates real harm: discriminatory hiring algorithms, opaque credit scoring, surveillance creep, and safety risks in critical infrastructure. The AI Act is an attempt to draw lines before the damage is done, not after. Proponents argue that without guardrails, we are sleepwalking into a world where AI systems make consequential decisions about people&apos;s lives with no transparency, no accountability, and no recourse.&lt;/p&gt;
&lt;p&gt;Both positions contain truth. And both, taken to their logical extreme, lead to bad outcomes.&lt;/p&gt;
&lt;p&gt;The unregulated approach creates enormous value quickly, but it also creates enormous risk. When an AI system wrongly denies someone a mortgage, or a medical chatbot gives dangerous advice, or a hiring tool systematically discriminates, the harm is real and often falls on people who have no visibility into the system that affected them. &quot;Move fast and break things&quot; might be acceptable when you are iterating on a landing page. It is a dangerous philosophy when the &quot;things&quot; being broken are people&apos;s livelihoods, health, or civil rights.&lt;/p&gt;
&lt;p&gt;The heavily regulated approach protects citizens, but it can also strangle the innovation it claims to want. European startups have been vocal about this. Some welcomed the clarification the Act provides, but others argued the original timeline &quot;favoured deep-pocketed American tech giants who could afford to hire armies of compliance lawyers.&quot; When compliance costs create a structural advantage for incumbents, regulation stops being a shield and starts being a moat, just one that protects the wrong people.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;A Product Manager&apos;s Perspective&lt;/h2&gt;
&lt;p&gt;Here is where I think the conversation goes wrong. Both sides frame regulation as something that happens &lt;em&gt;to&lt;/em&gt; product development, an external force that either helps or hinders. But if you have spent any time building products in complex environments, you know that constraints are not inherently good or bad. What matters is how you design around them.&lt;/p&gt;
&lt;p&gt;This is exactly the same principle I wrote about in the context of &lt;a href=&quot;https://stvpj.com/blog/agile-governance&quot;&gt;agile governance&lt;/a&gt;. Governance is not the enemy of delivery. Bad governance is the enemy of delivery. Good governance supports the flow of value by creating guardrails that keep teams on track without dictating every step.&lt;/p&gt;
&lt;p&gt;The EU AI Act, at its core, is asking for things that good product teams should want anyway:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Transparency&lt;/strong&gt;: Tell users they are interacting with AI. Explain how decisions are made. This is not a burden; it is basic product integrity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explainability&lt;/strong&gt;: If your AI system makes a consequential decision, you should be able to explain why. If you cannot, that is not a regulation problem. That is a product quality problem.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Human oversight&lt;/strong&gt;: For high-risk decisions, keep a human in the loop. Again, this is not radical. It is the kind of thing experienced PMs already advocate for.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Logging and monitoring&lt;/strong&gt;: Maintain records of how your system behaves. This is just good engineering practice dressed up in legal language.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Risk assessment&lt;/strong&gt;: Before you build, think about what could go wrong and for whom. This is discovery. This is what we are supposed to do.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The issue is not that these requirements exist. The issue is that they are being imposed on products that were never designed with them in mind. Retrofitting explainability into a system that was built as a black box is expensive and painful. Building it in from day one is a design decision, not a compliance project.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Forcing Function&lt;/h2&gt;
&lt;p&gt;The tension between regulation and innovation has been a recurring theme on &lt;a href=&quot;https://www.lennysnewsletter.com/&quot;&gt;Lenny&apos;s Podcast&lt;/a&gt; recently, and several conversations have landed on a point that I think is underappreciated: constraints do not just limit what you can build. They change how you think about what you should build.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/why-chatgpt-will-be-the-next-big-growth-channel-brian-balfour&quot;&gt;Brian Balfour&lt;/a&gt; described companies that define &quot;incredibly hard constraints&quot; as a strategy, not a problem. One company he worked with benchmarked against competitors and set a constraint that each function would be one-fifth the size. That constraint did not slow them down. It forced them to find fundamentally different ways of working, including aggressive adoption of AI tooling.&lt;/p&gt;
&lt;p&gt;Regulatory constraints can work the same way, if you let them.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/anthropics-1b-to-19b-growth-run&quot;&gt;Amol Avasare from Anthropic&lt;/a&gt; put it even more directly: &quot;As the risks get higher and the stakes get higher, I think the fact that we are taking a stance and safety is critical to what we do, is actually going to become a significant competitive advantage.&quot;&lt;/p&gt;
&lt;p&gt;This is not compliance-as-cost. This is compliance-as-differentiation. In a market where trust is increasingly scarce, the company that can say &quot;we built this responsibly, and here is the evidence&quot; has an advantage over the one scrambling to bolt on compliance features before a deadline.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/reflections-on-a-movement-eric-ries&quot;&gt;Eric Ries&lt;/a&gt; made a related point about AI alignment: &quot;Everyone&apos;s talking about AI alignment. I&apos;d be a little more sanguine about AI alignment if the companies doing the aligning were better at aligning their human intelligences.&quot; The Act&apos;s explainability and transparency requirements essentially force companies to make their organisational values explicit. Ries argues this is work the tech industry has &quot;severely underinvested&quot; in. He is right.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Lightweight Governance, Not Bureaucratic Theatre&lt;/h2&gt;
&lt;p&gt;My position is not that the EU AI Act is perfect. It is not. The classification system is ambiguous in places. The &quot;wellness tool vs medical device&quot; boundary is genuinely unclear. The phased timeline has created perverse incentives, with some companies rushing to launch high-risk products before enforcement kicks in. And the compliance burden falls disproportionately on startups who can least afford it.&lt;/p&gt;
&lt;p&gt;But the answer to imperfect regulation is not no regulation. It is better regulation. And the principles we apply in product development point the way.&lt;/p&gt;
&lt;p&gt;In product work, we know that heavyweight governance kills velocity. Stage gates, committee approvals, and thick compliance documents slow teams to a crawl. But we also know that zero governance is chaos. Without any constraints, teams build the wrong things, accumulate unmanageable risk, and lose alignment with the broader organisation.&lt;/p&gt;
&lt;p&gt;The sweet spot is lightweight governance: clear guardrails, empowered teams, and fast feedback loops. You define the boundaries, then give teams freedom within them. You inspect and adapt. You treat the governance model itself as a product that evolves.&lt;/p&gt;
&lt;p&gt;AI regulation should work the same way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Classification should be clear and predictable.&lt;/strong&gt; Founders should not be guessing whether their wellness app counts as a medical device. The boundaries need to be sharp enough that a product team can make a confident call during discovery, not after months of legal review.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compliance should be proportionate.&lt;/strong&gt; The requirements for a chatbot recommending restaurants should not be the same as for a system making parole decisions. Risk-based tiering is the right idea, but the tiers need to be practical, not just theoretical.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The cost should not be a moat.&lt;/strong&gt; If compliance is so expensive that only large incumbents can afford it, the regulation is failing. Tooling, templates, and shared infrastructure for common compliance patterns would help enormously.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The focus should be on outcomes, not paperwork.&lt;/strong&gt; Does the system behave safely? Can affected users understand and challenge decisions? Is there meaningful human oversight where it matters? These are the questions that matter, not whether a specific template was filled in.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Means for PMs&lt;/h2&gt;
&lt;p&gt;If you are a Product Manager working on anything that touches AI, this is your problem now. Not legal&apos;s problem. Not compliance&apos;s problem. Yours.&lt;/p&gt;
&lt;p&gt;The best PMs I know have always understood that constraints are not obstacles; they are design inputs. A screen size constraint forces better information hierarchy. A performance budget forces cleaner architecture. A regulatory requirement forces you to think about who your product affects and how.&lt;/p&gt;
&lt;p&gt;Here is what I would do right now:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Assess your risk classification early.&lt;/strong&gt; During discovery, not after launch. If your product touches hiring, credit, education, healthcare, law enforcement, or critical infrastructure, assume you are high-risk until proven otherwise. Build that assumption into your PRDs from the start.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Design for explainability from day one.&lt;/strong&gt; If you cannot explain why your system made a decision, you have a product quality problem regardless of what the EU thinks. Explainability is not just a compliance feature. It is a trust feature, and trust drives retention.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Own compliance as a product concern.&lt;/strong&gt; As Ian McAllister said on Lenny&apos;s Podcast: &quot;The more you grow, you have to increasingly find the constraints or barriers to your success and knock them down no matter what they are.&quot; Do not wait for legal to hand you a checklist. Understand the requirements yourself and factor them into your roadmap.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Watch the US convergence.&lt;/strong&gt; Colorado&apos;s SB 24-205 introduces risk management policies, algorithmic impact assessments, and consumer notice mechanisms. This is not an EU-only trend. The direction of travel is clear globally, and PMs who treat EU compliance as an isolated European problem will find themselves retrofitting again when similar requirements land closer to home.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;I remain deeply optimistic about AI. The technology is transformative, and the potential to improve lives at scale is real. But potential and impact are not the same thing. The gap between them is filled with product decisions, and those decisions need guardrails.&lt;/p&gt;
&lt;p&gt;The EU AI Act is imperfect, but its core instinct is right: consequential AI systems should be transparent, explainable, and accountable. These are not anti-innovation principles. They are pro-trust principles. And in the long run, trust is the foundation that sustainable innovation is built on.&lt;/p&gt;
&lt;p&gt;The PMs who treat compliance as a checkbox will resent it. The PMs who treat it as a product constraint, one that forces clearer thinking, better architecture, and stronger user trust, will build better products because of it.&lt;/p&gt;
&lt;p&gt;We should not be choosing between the American model of unchecked speed and the European model of cautious restraint. We should be applying the same principles we use in product development: encourage experimentation, accept that risk is inherent, but apply lightweight governance to guide the flow of value without blocking it.&lt;/p&gt;
&lt;p&gt;That is not a regulatory philosophy. That is just good product management.&lt;/p&gt;
</content:encoded><category>AI</category><category>Product Management</category><category>Governance</category><category>Compliance</category><category>Strategy</category><author>Steve James</author></item><item><title>Agentic Engineering vs Vibe Coding: A Product Manager&apos;s Guide to Knowing the Difference</title><link>https://stvpj.com/blog/agentic-engineering-vs-vibe-coding/</link><guid isPermaLink="true">https://stvpj.com/blog/agentic-engineering-vs-vibe-coding/</guid><description>Agentic engineering is about to give Product Managers superpowers we never had before. Vibe coding has its place too, but only if we are honest about what it is and what it is not.</description><pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The distinction that matters&lt;/h2&gt;
&lt;p&gt;Scroll through any PM community right now and you will find two very different conversations happening under the same banner of &quot;AI-assisted development.&quot; In one, experienced practitioners are describing how AI agents are amplifying their existing expertise, letting them move faster, think bigger, and validate ideas in hours rather than weeks. In the other, people are asking whether Product Managers now need to ship production code because, after all, an LLM can write it for you.&lt;/p&gt;
&lt;p&gt;These are not the same conversation. And conflating them is where people get into trouble.&lt;/p&gt;
&lt;p&gt;Simon Willison&apos;s &lt;a href=&quot;https://simonwillison.net/guides/agentic-engineering-patterns/&quot;&gt;Agentic Engineering Patterns&lt;/a&gt; guide draws a sharp line between two modes of working with AI. &lt;strong&gt;Agentic engineering&lt;/strong&gt; is what happens when domain experts use AI agents to amplify skills they already possess. &lt;strong&gt;Vibe coding&lt;/strong&gt; is what happens when someone without deep understanding of the output delegates entirely to the model and hopes for the best.&lt;/p&gt;
&lt;p&gt;For Product Managers, this distinction is everything. Agentic engineering is, by far, the area we should be embracing. It is going to 10X our productivity and unlock superpowers we never had before. Vibe coding has a place in our toolkit too, but we need to be very clear-eyed about what that place is.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why agentic engineering is our superpower&lt;/h2&gt;
&lt;p&gt;The core pattern of agentic engineering, as Willison describes it, is delegation and supervision. Instead of treating an LLM as a glorified autocomplete, you define goals, constraints, quality criteria, and workflows for the AI agent. You review and refine its output. You bring your expertise to bear on what the agent produces.&lt;/p&gt;
&lt;p&gt;This should sound familiar. It is, fundamentally, what Product Managers do. We have spent our entire careers defining problems clearly, setting constraints, evaluating outcomes, and making trade-off decisions. We orchestrate. We prioritise. We apply judgement to ambiguous situations. Agentic engineering takes these existing strengths and amplifies them to a degree that was simply not possible before.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/getting-paid-to-vibe-code&quot;&gt;Lazar Jovanovic&lt;/a&gt;, who works as a professional vibe coder at Lovable and was featured on Lenny&apos;s podcast, captures the underlying principle perfectly:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI is an amplifier regardless of your background. If you do not know what you are doing, you are just going to produce garbage faster.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The corollary is equally true. If you &lt;em&gt;do&lt;/em&gt; know what you are doing, you produce excellence faster. For PMs with strong product sense, clear thinking, and good judgement, agentic AI is a force multiplier unlike anything we have had before.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What 10X actually looks like&lt;/h2&gt;
&lt;p&gt;Let me be concrete about what this means in practice, because &quot;10X productivity&quot; can sound like empty hype if you do not ground it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Research and synthesis at speed.&lt;/strong&gt; Competitive analysis that used to take a week of desk research can be synthesised in an afternoon. User research themes can be clustered, cross-referenced against quantitative data, and tested for patterns in hours. One PM described an AI agent that saves hours of manual checking each week by surfacing competitive moves automatically. This is not replacing the PM&apos;s judgement about what matters. It is eliminating the drudgery that sits between a question and an informed answer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Faster feedback loops.&lt;/strong&gt; &lt;a href=&quot;https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna&quot;&gt;Elena Verna&lt;/a&gt; described on Lenny&apos;s podcast how her team at Amplitude went from the traditional multi-month cycle of user research to design sprints to engineering roadmap prioritisation, down to prototyping in a couple of weeks. The compression is not about cutting corners. It is about removing the wait time between having an idea and being able to test it against reality.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Richer stakeholder conversations.&lt;/strong&gt; When you can spin up an interactive prototype to stress-test a hypothesis before you have even written the first user story, the quality of your conversations with engineering, design, and leadership changes fundamentally. You are no longer describing an abstract concept in a PRD. You are showing people a working example of what those requirements describe.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Judgement as the bottleneck, not bandwidth.&lt;/strong&gt; &lt;a href=&quot;https://www.lennysnewsletter.com/p/microsoft-cpo-on-ai&quot;&gt;Aparna Chennapragada&lt;/a&gt;, formerly a senior PM leader at Google, made the point on Lenny&apos;s podcast that the taste-making and the editing function becomes really important in an AI-augmented world. If your role was mostly process management and report generation, you should be concerned. But if your value lies in judgement, prioritisation, and the ability to frame problems clearly, you are about to become significantly more powerful. Agentic engineering shifts the constraint from &quot;I do not have enough time to do the analysis&quot; to &quot;I need to make better decisions with the analysis I now have.&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;The expertise amplifier effect&lt;/h2&gt;
&lt;p&gt;There is a pattern here that I think is worth naming explicitly. Every capability that agentic engineering unlocks for PMs depends on the expertise you bring to the table. The AI agent does not know which competitive signal matters. You do. The agent does not know which user research theme connects to your strategic priorities. You do. The agent does not know whether the prototype it just built actually solves the customer&apos;s problem. You do.&lt;/p&gt;
&lt;p&gt;This is what Willison means when he talks about &quot;hoarding things you know how to do.&quot; The more patterns, frameworks, and hard-won experience you have accumulated over your career, the more effectively you can direct AI agents.&lt;/p&gt;
&lt;p&gt;Your expertise is not threatened by agentic engineering. It is the prerequisite for it.&lt;/p&gt;
&lt;p&gt;Jovanovic made the same point using the &lt;a href=&quot;https://www.lennysnewsletter.com/p/getting-paid-to-vibe-code&quot;&gt;Aladdin and the Genie analogy&lt;/a&gt;. You rub the lamp, the genie comes out, and your first wish is to be taller. The genie makes you 13 feet tall because you were not specific enough. The quality of what you get from an AI agent is directly proportional to the clarity and precision of your instructions. And clarity and precision in describing what needs to be built, for whom, and why, is quite literally the core PM skill.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://thenewstack.io/vibe-coding-agentic-engineering/&quot;&gt;David Mytton from Arcjet&lt;/a&gt; articulated the boundary well: AI coding only works when there are clear guardrails in place, meaning good documentation and comprehensive tests the agent can run. That, he says, is what differentiates vibe coding from agentic engineering. The guardrails require expertise to define. PMs who have spent years learning to write clear requirements, define acceptance criteria, and think through edge cases are already building the muscle that agentic engineering demands.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The vibe coding caveat&lt;/h2&gt;
&lt;p&gt;All of which brings me to vibe coding, and why it deserves a more measured take than it usually gets.&lt;/p&gt;
&lt;p&gt;I am bullish on Product Managers using vibe coding. But specifically for what it is: a way to supercharge the things we are already doing. When you vibe code a prototype, &lt;strong&gt;you are not becoming a software engineer&lt;/strong&gt;. You are doing what PMs have always done: articulating requirements and demonstrating a potential solution, except that now the output is a working interactive example rather than a static wireframe or a written specification. It is analogous to writing a PRD, except you are showing those requirements in a living, testable form.&lt;/p&gt;
&lt;p&gt;I have experienced this firsthand. I built a story mapping tool using Claude Code because every commercially available option was either wildly over-engineered and expensive, or a Miro or Lucid template that ended up being more work than it was worth. I built this thing and I could not be happier with it for my own tasks. It does exactly what I need.&lt;/p&gt;
&lt;p&gt;But the moment you start thinking about releasing something like that beyond your own use, you need to be honest about what you do not know. When you vibe code something, you have no real understanding of how it works under the surface. You can see the inputs and the outputs. You can test the happy path and a few edge cases. But you have no idea what shortcuts the model took, what security considerations were ignored, what performance problems are lurking, or what happens when the thing encounters a scenario that neither you nor the model anticipated.&lt;/p&gt;
&lt;p&gt;There are sensible mitigations. Get a different LLM to conduct a code review. Ask the model to explain each part of the codebase so you have a basic understanding of how it is put together. &lt;a href=&quot;https://www.lennysnewsletter.com/p/the-foundation-sprint-jake-knapp-and-john-zeratsky&quot;&gt;Jake Knapp and John Zeratsky&lt;/a&gt;, updating their Design Sprint methodology for the AI era, noted on Lenny&apos;s podcast that teams who jumped to vibe coding prototypes too quickly produced output that was &quot;super generic&quot; and did not really describe what the product was. Their advice: you will move faster if you slow down a little at the beginning. That same discipline applies here.&lt;/p&gt;
&lt;p&gt;But the fundamental limitation remains. There will always be something you did not know to ask about. Product Managers have been working with engineers for decades, so we know the kinds of things to look out for at a high level. We know about technical debt, scalability, security reviews, testing. But unless you have actually written production code, maintained it, debugged it at 2am, and dealt with the consequences of architectural decisions made years ago, you are not going to catch everything. And in software, what you miss can range from mildly annoying to catastrophically expensive.&lt;/p&gt;
&lt;p&gt;The rule of thumb is simple. Vibe code prototypes enthusiastically. Use them to stress-test ideas, to have better conversations with your engineering teams, to show stakeholders what a solution could look and feel like. And when it is time for that prototype to become production-ready, bring in the engineers with decades of experience to make it work properly. The prototype proves the concept. The engineering team makes it real.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What the full picture could look like&lt;/h2&gt;
&lt;p&gt;I heard someone describe a workflow recently that I think paints a compelling picture of where this is all heading. Their PM team still conducts user research the way they always have. They send out surveys, capture usage metrics, and go out and speak to customers. All of that is captured, transcribed, and ingested into a data store. Agents then go over this data continuously, looking for trending themes, recurring problems, and emerging opportunities. These are automatically added to a dynamically constructed opportunity solution tree.&lt;/p&gt;
&lt;p&gt;The PMs monitor this tree, and when something looks to have enough weight behind it, they build a prototype of the potential solution using a tool like Claude Code. That prototype is then dogfooded by the organisation for a few weeks to stress-test both the idea and the proposed solution. If it looks like it has legs, the prototype is handed to the engineering teams, who pull it apart and build a production-ready version using the correct guardrails and technologies to ensure it is performant, secure, and scalable.&lt;/p&gt;
&lt;p&gt;What strikes me about this example is that it is not science fiction. Every piece of this workflow exists today. Agentic engineering handles the research synthesis and opportunity identification, amplifying the PM&apos;s expertise rather than replacing it. Vibe coding handles the prototype, giving the team something real to react to rather than a static specification. And professional engineering handles production, because that is where the decades of hard-won expertise in building reliable, secure software actually matters. Each mode of working with AI is used for exactly what it is good at, and nothing more.&lt;/p&gt;
&lt;p&gt;That, to me, is the future that PM teams should be aspiring towards.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Where this leaves us&lt;/h2&gt;
&lt;p&gt;The distinction between agentic engineering and vibe coding is not just semantic. It is the difference between a power tool in the hands of a craftsperson and a power tool in the hands of someone who watched a YouTube tutorial. Both can produce impressive results. Only one can be trusted when it matters.&lt;/p&gt;
&lt;p&gt;Product Managers have spent decades building the exact skills that agentic engineering rewards: clarity of thought, structured problem framing, judgement under ambiguity, and the ability to orchestrate people and processes towards an outcome. AI does not diminish any of that. It amplifies all of it. The PMs who recognise this, and who learn to wield these tools with the same discipline they bring to everything else, are going to be extraordinarily effective.&lt;/p&gt;
&lt;p&gt;Embrace agentic engineering fully. Use vibe coding enthusiastically for prototypes and personal tools. And when it is time to make something real, bring in the engineers. Not because AI has failed, but because knowing the limits of your expertise has always been the most product-management thing you can do.&lt;/p&gt;
</content:encoded><category>AI</category><category>Product Management</category><category>Agentic Engineering</category><category>Vibe Coding</category><category>Prototyping</category><category>PM Skills</category><author>Steve James</author></item><item><title>The Great PM Skills Debate: What AI Won&apos;t Replace</title><link>https://stvpj.com/blog/great-pm-skills-debate/</link><guid isPermaLink="true">https://stvpj.com/blog/great-pm-skills-debate/</guid><description>Everyone agrees AI is transforming product management. Nobody agrees on which skills survive. The debate over strategy, taste, and the &apos;editing function&apos; reveals a profession in the middle of an identity crisis.</description><pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There&apos;s a debate happening in product management right now that I find genuinely fascinating, not because it&apos;s new (people have been asking &quot;will AI replace PMs?&quot; for two years), but because it&apos;s finally getting specific. The question has shifted from &quot;will it happen?&quot; to &quot;which bits, exactly?&quot;&lt;/p&gt;
&lt;p&gt;The conversation is playing out across podcasts, research, Medium think pieces, and every PM WhatsApp channel I&apos;m in. &lt;a href=&quot;https://medium.com/design-bootcamp/what-284-episodes-of-lennys-podcast-reveal-about-where-product-management-is-headed-99d46cacb7d3&quot;&gt;Zoë Yang&apos;s analysis of 284 Lenny&apos;s Podcast episodes&lt;/a&gt; captures the trajectory well: PMs are shifting from feature delivery to systems thinking, building evals, instrumenting failures, and operating in roles with increasingly blurred boundaries. What&apos;s emerged isn&apos;t a clean answer. It&apos;s a set of contradictions that tell us something important about where the profession is headed.&lt;/p&gt;
&lt;h2&gt;The one thing everyone agrees on&lt;/h2&gt;
&lt;p&gt;Let&apos;s start with the common ground, because there is some.&lt;/p&gt;
&lt;p&gt;The administrative layer of product management is being compressed. Meeting notes, backlog grooming, ticket management, roadmap formatting, basic prioritisation frameworks: these are already being automated or dramatically accelerated. Nobody seriously disputes this. Most commentary frames AI as enhancing PM capabilities rather than replacing them, but even the optimistic takes acknowledge that the coordination and documentation layer is being hollowed out.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/product-management-theater-marty&quot;&gt;Marty Cagan&lt;/a&gt; put it bluntly on Lenny&apos;s Podcast: if your job is fundamentally &quot;backlog administrator&quot;, that work is already being done by AI, and it&apos;s only going to get better supported. The question is what&apos;s left once you strip that layer away.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/microsoft-cpo-on-ai&quot;&gt;Aparna Chennapragada&lt;/a&gt; framed it well: if you&apos;re mostly a process person, tracking things, sending emails, managing the machinery, you&apos;ve got a real question to answer about your value add. But on the flip side, she argues, &quot;the taste-making and the editing function becomes really, really important.&quot;&lt;/p&gt;
&lt;p&gt;That word &quot;editing&quot; keeps coming up. And it&apos;s worth digging into.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Editor-in-Chief thesis&lt;/h2&gt;
&lt;p&gt;There&apos;s a compelling argument gaining traction that the PM role is shifting from &quot;builder&quot; to &quot;editor&quot;. &lt;a href=&quot;https://medium.com/design-bootcamp/taste-is-the-only-moat-surviving-the-ai-flood-0420ecc6ce03&quot;&gt;Eric M. De Castro&lt;/a&gt; captured it directly: the bots can manage the backlog, the agents can optimise the velocity, and the role of the Senior PM has fundamentally shifted from &quot;Builder&quot; to &quot;Editor-in-Chief&quot;.&lt;/p&gt;
&lt;p&gt;The logic goes like this. When AI can generate strategies, write PRDs, draft user stories, and even prototype features, the PM&apos;s job is no longer to produce these artefacts. It&apos;s to curate, refine, and judge them. You&apos;re not writing the first draft any more. You&apos;re deciding which of five AI-generated drafts is actually good, and why.&lt;/p&gt;
&lt;p&gt;A similar point surfaces in &lt;a href=&quot;https://blog.mj-kang.com/taste-in-the-age-of-ai/&quot;&gt;a piece on taste in the AI age&lt;/a&gt;: we&apos;re entering an era where anyone can make a lot, so the differentiator isn&apos;t how much you can produce, it&apos;s how much you can discard. Building is becoming abundant. Editing is becoming the craft.&lt;/p&gt;
&lt;p&gt;This shouldn&apos;t feel entirely alien to experienced PMs. The best Product Managers were always defined less by the ideas they greenlit and more by the ones they killed. Saying no to the majority of what comes down the funnel, the feature requests, the stakeholder pet projects, the shiny distractions, has always been the job. What&apos;s changed is the volume. When AI can generate ten plausible strategies before lunch, the filtering muscle doesn&apos;t become less important. It becomes the whole game. Same skill, but on steroids.&lt;/p&gt;
&lt;p&gt;The question of whether machines can even develop taste is &lt;a href=&quot;https://webdesignerdepot.com/ai-as-art-director-can-machines-develop-taste/&quot;&gt;explored well by Web Designer Depot&lt;/a&gt;, who argue that while AI can learn aesthetic patterns, true taste involves cultural context, intentional rule-breaking, and emotional resonance that remain distinctly human.&lt;/p&gt;
&lt;p&gt;I find this persuasive up to a point. But it raises an uncomfortable question that nobody seems to have a good answer to yet: how do you develop editorial judgment if you never do the building? Taste requires reps. If AI does the work, where do the reps come from?&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://medium.com/design-bootcamp/taste-will-save-us-and-other-lies-we-tell-ourselves-de3ba3ba9509&quot;&gt;Hilde Dybdahl Johannessen&lt;/a&gt; pushed back on the &quot;taste as moat&quot; narrative directly, arguing that taste itself can become a form of gatekeeping and that AI may eventually learn to simulate it through preference learning. It&apos;s a useful corrective. The taste argument is appealing, but it&apos;s not as airtight as its proponents suggest.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The strategy paradox&lt;/h2&gt;
&lt;p&gt;Here&apos;s where it gets really interesting, and where the PM community is genuinely split.&lt;/p&gt;
&lt;p&gt;On one side, you have people arguing that strategy is the skill most vulnerable to AI replacement. The reasoning: AI can process vastly more market data, competitive intelligence, and customer feedback than any human. Strategic frameworks are well-documented and replicable. Pattern recognition from thousands of successful strategies is exactly what AI excels at.&lt;/p&gt;
&lt;p&gt;On the other side, you have people arguing that strategy is the skill most protected from AI. The reasoning: strategy requires contrarian thinking, and AI is trained on consensus data. It demands human judgment about what to ignore. It involves organisational politics and relationship dynamics that AI cannot navigate. As &lt;a href=&quot;https://medium.com/design-bootcamp/taste-will-save-us-and-other-lies-we-tell-ourselves-de3ba3ba9509&quot;&gt;Hilde Dybdahl Johannessen points out&lt;/a&gt;, without deliberate steering, AI will give your company roughly the same advice as your competitors. The market doesn&apos;t reward copycats. It rewards contrarians who are right.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/how-80000-companies-build-with-ai-asha-sharma&quot;&gt;Asha Sharma&lt;/a&gt; and Lenny discussed this tension directly on the podcast. You&apos;d think that an AI with all the information about where the market is going, your metrics, and your product today would be excellent at developing strategy. And yet many people believe it&apos;s the one thing AI won&apos;t be good at for a long time, because that&apos;s where human judgment is most irreplaceable.&lt;/p&gt;
&lt;p&gt;I don&apos;t think either side has won this argument. But I think the framing is slightly wrong. Strategy isn&apos;t one thing. The analytical component of strategy, market sizing, competitive mapping, trend identification, is clearly vulnerable. The judgment component, what to bet on, what to ignore, when to zig while everyone zags, is clearly protected. The question for any individual PM is: which of those two things do you actually spend your time doing?&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The jagged frontier&lt;/h2&gt;
&lt;p&gt;Ethan Mollick&apos;s concept of the &quot;&lt;a href=&quot;https://bigthink.com/plus/ethan-mollicks-four-guiding-principles-for-using-ai-at-work/&quot;&gt;jagged frontier&lt;/a&gt;&quot; is the best mental model I&apos;ve found for thinking about all of this. The idea is that AI&apos;s capabilities aren&apos;t a clean line. They&apos;re uneven. AI excels at some surprisingly complex tasks while failing at some surprisingly simple ones. And the frontier keeps moving.&lt;/p&gt;
&lt;p&gt;The practical implication is that you can&apos;t make blanket statements about what AI can and can&apos;t do. You have to test it, task by task, and develop an instinct for where the frontier currently sits. &lt;a href=&quot;https://bigthink.com/plus/ethan-mollicks-four-guiding-principles-for-using-ai-at-work/&quot;&gt;Mollick&apos;s four principles&lt;/a&gt; are useful here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;always invite AI to the table,&lt;/li&gt;
&lt;li&gt;be the human in the loop,&lt;/li&gt;
&lt;li&gt;treat AI like a person (but remember it isn&apos;t one), and&lt;/li&gt;
&lt;li&gt;assume this is the worst AI you&apos;ll ever use.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That last one matters. Whatever AI can&apos;t do today, it will probably do tomorrow. So building your career strategy around AI&apos;s current limitations is a losing game. The question isn&apos;t &quot;what can&apos;t AI do?&quot; It&apos;s &quot;what will remain uniquely human even as AI gets dramatically better?&quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The buy-in problem nobody talks about&lt;/h2&gt;
&lt;p&gt;Multiple podcast guests raised a point that I think is under-discussed: AI can&apos;t do stakeholder management.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/bending-the-universe-in-your-favor&quot;&gt;Claire Vo&lt;/a&gt; put it well when she noted that she doesn&apos;t know how an AI bot achieves buy-in and alignment, &quot;unless everybody&apos;s got their own little bot and they&apos;re all talking to each other.&quot;&lt;/p&gt;
&lt;p&gt;This sounds like a throwaway observation, but I think it&apos;s profound. A huge amount of product management is persuasion. Convincing an engineering lead to prioritise your feature. Getting a sceptical executive to fund an experiment. Navigating the politics of a cross-functional team where everyone has different incentives. Building trust with a design team that&apos;s been burned by PMs before.&lt;/p&gt;
&lt;p&gt;None of this is analytical. None of it can be synthesised from data. And it&apos;s not going away any time soon, because it&apos;s fundamentally about human relationships and organisational dynamics. &lt;a href=&quot;https://medium.com/@michelegalli.pm/ai-wont-replace-product-managers-it-will-expose-them-fcd5c1e44cfa&quot;&gt;Michele Galli&lt;/a&gt; made a related point: many PMs have built their competence around communication, organisation, and alignment. Those skills are useful, but they&apos;re also relatively safe. AI compresses the value of coordination, but not the value of genuine influence.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The blurring of roles&lt;/h2&gt;
&lt;p&gt;One of the most consistent themes across product-related discussions is that the boundaries between PM, engineer, and designer are dissolving.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/you-dont-need-to-be-a-well-run-company-to-win-tamar-yehoshua&quot;&gt;Tamar Yehoshua&lt;/a&gt; predicted that in five to ten years, these lines will blur significantly because AI will enable PMs to build prototypes and designers to code. &lt;a href=&quot;https://www.businessinsider.com/meta-pms-ai-builders-tech-industry-2026-2&quot;&gt;Meta PMs are already using AI coding tools&lt;/a&gt; to become builders themselves, with one PM describing it as being handed &quot;superpowers&quot;, operating less like a conductor moving work between functions and more like a product owner who can execute directly.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/thinking-beyond-frameworks-casey&quot;&gt;Casey Winters&lt;/a&gt; offered the sharpest version of this: if you thought the PM job was just filling in frameworks and collecting promotions, then yes, AI will replace you. The real PM job, the one requiring genuine subject matter expertise, is the least likely to be replaced.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/the-non-technical-pms-guide-to-building-with-cursor&quot;&gt;Zevi Arnovitz&lt;/a&gt; pushed back strongly against the concern that AI weakens PM skills, arguing instead that it&apos;s a collaborative learning opportunity. He sees AI-assisted building as a way for PMs to deepen their craft, not atrophy it. I think he&apos;s partly right. But the risk of atrophy is real for people who skip the fundamentals entirely and go straight to AI-generated outputs without understanding what good looks like.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Where I land&lt;/h2&gt;
&lt;p&gt;I&apos;ve been thinking about this a lot, partly because I&apos;ve lived through enough technology shifts to know that the conventional wisdom is usually wrong in at least one important way.&lt;/p&gt;
&lt;p&gt;Here&apos;s my take: experienced Product Managers who keep up with the technology are going to become more in demand, not less. The &quot;human&quot; skills that make a great PM (judgment, taste, persuasion, the ability to synthesise conflicting signals into a coherent direction) don&apos;t get replaced by AI. They get amplified by it.&lt;/p&gt;
&lt;p&gt;The reason is straightforward. If AI handles the execution burden, the research synthesis, the first drafts, the data crunching, the documentation, then experienced PMs can finally step back from the production line and focus on what they should have been doing all along: orchestration. Setting direction. Making judgment calls. Editing rather than writing. That&apos;s a 10x opportunity for people who have the foundational skills to take advantage of it.&lt;/p&gt;
&lt;p&gt;But, and this is the critical caveat, only if they actually engage with the tools. The PMs who will struggle are the ones who either refuse to use AI (and get outpaced) or rely on it blindly (and lose their edge). The sweet spot is what Mollick describes: being the human in the loop, with genuine expertise about when to trust the output and when to override it.&lt;/p&gt;
&lt;p&gt;The bifurcation is real. Administrative PMs are in trouble. Strategic, taste-driven, judgment-heavy PMs are entering their golden age. As &lt;a href=&quot;https://swkhan.medium.com/ai-is-not-going-to-fix-product-management-but-you-can-heres-how-7e06f102b35d&quot;&gt;Saeed Khan argues&lt;/a&gt;, AI won&apos;t magically fix role definitions, bad objectives, or lack of strategy; those are human problems that require human solutions. The uncomfortable middle ground is that most PMs are a bit of both, and the transition isn&apos;t going to be comfortable.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The junior PM problem&lt;/h2&gt;
&lt;p&gt;There&apos;s a question that almost nobody in this debate is addressing honestly, and it&apos;s the one that worries me most: what happens to the people who haven&apos;t gotten good yet?&lt;/p&gt;
&lt;p&gt;Every optimistic take on AI and product management, including mine, rests on the same assumption: that experienced PMs with strong judgment and taste will thrive. Fine. But where do experienced PMs come from? They come from junior PMs who were, at one point, not very good at the job.&lt;/p&gt;
&lt;p&gt;The basic premise of career development in almost every profession has always been the same. When you start, you&apos;re bad at it. The company understands you&apos;re bad at it. They pay you to do enough of the basic work, the ticket grooming, the meeting notes, the stakeholder chasing, the first-draft PRDs, with the expectation that over time you&apos;ll learn from it, develop judgment, and become genuinely valuable. The grunt work isn&apos;t just work. It&apos;s training.&lt;/p&gt;
&lt;p&gt;Now companies are looking at that same grunt work and seeing that AI can do it faster, cheaper, and without needing a desk. The signal from hiring managers is becoming difficult to ignore: they&apos;re no longer prepared to pay someone to be bad at the job long enough for them to get good. Junior PM roles are being cut or simply not backfilled. The entry-level pipeline is narrowing.&lt;/p&gt;
&lt;p&gt;This is, of course, extremely short-sighted. The senior PMs everyone is so keen to retain won&apos;t be around forever. They&apos;ll move on, burn out, retire, or get poached. And if there&apos;s nobody coming up behind them, nobody who spent two years in the trenches learning what a good user story looks like by writing five hundred bad ones, then you&apos;ve got a succession crisis dressed up as a cost saving.&lt;/p&gt;
&lt;p&gt;It also circles back to the taste question raised earlier. If taste requires reps, and the reps are being automated away, how does the next generation develop the judgment that everyone agrees is irreplaceable? You can&apos;t edit what you&apos;ve never written. You can&apos;t curate if you&apos;ve never built. The &quot;Editor-in-Chief&quot; thesis is compelling for people who already have twenty years of context. It&apos;s a dead end for someone in their first role.&lt;/p&gt;
&lt;p&gt;I don&apos;t have a clean answer to this. But I think it&apos;s the most important structural question the profession faces. The debate about whether AI will replace experienced PMs is interesting. The question of whether we&apos;re quietly dismantling the path to becoming one is urgent.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The bottom line&lt;/h2&gt;
&lt;p&gt;The PM skills debate isn&apos;t really about AI at all. It&apos;s about something the profession has been avoiding for years: what is the actual, irreducible value of a Product Manager?&lt;/p&gt;
&lt;p&gt;AI is forcing that question into the open. The coordination gets automated. The strategy gets challenged. What remains is judgment, taste, and the ability to make things happen through people. For experienced PMs who engage with the tools, that&apos;s a genuinely exciting shift. The work gets harder, but it also gets closer to the work that matters.&lt;/p&gt;
&lt;p&gt;The problem is that this optimism only holds if you zoom in on the people who are already good. Zoom out and the picture is more troubling. If the profession celebrates the rise of the &quot;Editor-in-Chief&quot; PM while quietly eliminating the junior roles where people learn to write in the first place, we&apos;re not evolving the discipline. We&apos;re hollowing it out.&lt;/p&gt;
&lt;p&gt;The companies that get this right will be the ones who recognise that AI doesn&apos;t remove the need to develop people. It changes how you do it. The grunt work might look different, the apprenticeship might be shorter, the tools might be better. But the principle remains: you have to let people be bad at something long enough for them to get good. Any organisation that forgets that is saving money today and buying a talent crisis tomorrow.&lt;/p&gt;
&lt;p&gt;So yes, experienced PMs who embrace AI are entering a golden age. But the real test of the profession isn&apos;t whether the current generation thrives. It&apos;s whether we&apos;re building the conditions for the next one to exist at all.&lt;/p&gt;
</content:encoded><category>AI</category><category>Product Management</category><category>PM Skills</category><category>Strategy</category><category>Lenny&apos;s Podcast</category><author>Steve James</author></item><item><title>Speak Up, Write It Down, Put It Out There</title><link>https://stvpj.com/blog/speaking-up/</link><guid isPermaLink="true">https://stvpj.com/blog/speaking-up/</guid><description>Why forcing yourself to articulate your thinking is the most underrated skill in your career. From rubber duck debugging to publishing online, the mechanism is the same: saying it out loud makes the thinking better.</description><pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Early in my career, I received a piece of feedback that stuck with me. It was the kind of thing that stings a little because you know it&apos;s true:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I would like Steve to be more forceful and vocal in meetings. In my eyes it appears that Steve providing his insights earlier in discussions could at times &apos;short-cut&apos; meetings.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I knew immediately he was right. It wasn&apos;t one of those bits of feedback you file away and rediscover years later with fresh eyes. It landed with the uncomfortable clarity of something I&apos;d already been half-aware of. I&apos;d sit in meetings, form thoughts, wait for the perfect moment to contribute, and then watch as the conversation moved on without me. Or I&apos;d speak up late, only to find that the group had already landed on what I&apos;d been thinking ten minutes earlier. The feedback didn&apos;t tell me anything new. It just said it out loud, which, ironically, is exactly what I&apos;d been failing to do.&lt;/p&gt;
&lt;p&gt;That realisation sent me down a path I&apos;m still on: articulating my thinking, out loud, to other people, in writing, in public, because it changes the quality of the thinking itself.&lt;/p&gt;
&lt;h2&gt;Why putting your thinking out there matters&lt;/h2&gt;
&lt;p&gt;Let&apos;s start with what happens when people do articulate their thinking, both for themselves and for the people around them.&lt;/p&gt;
&lt;p&gt;According to &lt;a href=&quot;https://www.inc.com/melanie-curtin/employees-who-feel-heard-are-46x-more-likely-to-feel-empowered-to-do-their-best-work.html&quot;&gt;a Salesforce study reported in &lt;em&gt;Inc.&lt;/em&gt;&lt;/a&gt;, employees who feel their voice is heard at work are 4.6 times more likely to feel empowered to perform their best work. That&apos;s not just a nice-to-have. It suggests that the act of contributing, and being received, fundamentally changes how people experience their own capability. Meanwhile, &lt;a href=&quot;https://www.businesswire.com/news/home/20240221824645/en/Grammarly-and-Harris-Poll-Find-Using-Generative-AI-for-Communication-Could-Save-Up-to-1.6-Trillion-Annually-in-U.S.-Productivity&quot;&gt;Grammarly&apos;s State of Business Communication report&lt;/a&gt; found that 66% of knowledge workers and 72% of leaders wish their organisations would invest in tools to help them communicate more effectively. The appetite is there. People want to share their thinking. The friction is in the how.&lt;/p&gt;
&lt;p&gt;Deb Liu, author of &lt;a href=&quot;https://debliu.com/&quot;&gt;&lt;em&gt;Take Back Your Power&lt;/em&gt;&lt;/a&gt;, captures a version of this tension. She tells the story of a talented colleague who kept getting overlooked: &quot;Every time she came up for promotion or calibration, people were like, &apos;Oh, what does she do?&apos; And it was because she was not good at broadcasting or explaining what she does.&quot; This isn&apos;t a story about someone who lacked insight. It&apos;s a story about insight that never made it out of one person&apos;s head and into the space where others could benefit from it.&lt;/p&gt;
&lt;p&gt;That&apos;s the real opportunity here. Articulating your thinking doesn&apos;t just help your career, though it does that too. It sharpens the thinking itself. Every thought you put into words is a thought you&apos;ve been forced to examine, structure, and stress-test. And every thought you share is one that other people can build on, challenge, or redirect. The value isn&apos;t just in being heard. It&apos;s in what the act of speaking does to the quality of your ideas.&lt;/p&gt;
&lt;h2&gt;Why articulation changes your thinking&lt;/h2&gt;
&lt;p&gt;Here&apos;s where things get interesting. Software engineers have known this for decades, through a concept called &lt;a href=&quot;https://en.wikipedia.org/wiki/Rubber_duck_debugging&quot;&gt;rubber duck debugging&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The idea, widely attributed to &lt;a href=&quot;https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/&quot;&gt;&lt;em&gt;The Pragmatic Programmer&lt;/em&gt;&lt;/a&gt; by Andrew Hunt and David Thomas, is beautifully simple: when you&apos;re stuck on a bug, you explain your problem aloud to a rubber duck. That&apos;s it. You talk to a duck.&lt;/p&gt;
&lt;p&gt;It sounds absurd, but it works because of something fundamental about how our brains process information. When you&apos;re scanning code silently, you&apos;re in &quot;pattern-matching mode&quot;: your eyes are moving, you feel productive, but you&apos;re glossing over assumptions. The moment you force yourself to explain the problem step by step, as if to someone who knows nothing about your code, you shift into &quot;explaining mode.&quot; You have to be explicit about what you &lt;em&gt;think&lt;/em&gt; is happening versus what&apos;s &lt;em&gt;actually&lt;/em&gt; happening. And that&apos;s usually where the flaw reveals itself.&lt;/p&gt;
&lt;p&gt;Many developers find they solve the problem halfway through explaining it. Sometimes before they even finish the question. It&apos;s also why posting a question on Stack Overflow and then immediately finding the answer yourself is such a universal experience. The discipline of framing the problem clearly is often all it takes.&lt;/p&gt;
&lt;p&gt;This isn&apos;t a programming trick. It&apos;s a thinking trick. And it maps directly onto that meeting feedback I received. I was doing the silent equivalent of scanning code, processing information internally, pattern-matching, feeling like I was contributing by listening carefully. But the act of forcing my thoughts into words, into the room, would have sharpened them in ways that internal processing simply couldn&apos;t.&lt;/p&gt;
&lt;h2&gt;Getting started (when it feels impossible)&lt;/h2&gt;
&lt;p&gt;If you&apos;re reading this and thinking &quot;that&apos;s easy for you to say,&quot; I hear you. Speaking up is genuinely uncomfortable, especially in environments where you feel junior, uncertain, or simply outnumbered.&lt;/p&gt;
&lt;p&gt;So how do you actually start? Claire Hughes Johnson, in her book &lt;a href=&quot;https://press.stripe.com/scaling-people&quot;&gt;&lt;em&gt;Scaling People&lt;/em&gt;&lt;/a&gt;, offers a deceptively simple entry point: ask a question. A question is not threatening. It doesn&apos;t require you to have the answer. It just forces the room to examine an assumption. That&apos;s the meeting equivalent of rubber ducking. You&apos;re not declaring anything. You&apos;re just making implicit thinking explicit.&lt;/p&gt;
&lt;p&gt;The fear, of course, is that you&apos;ll stumble. That you&apos;ll start a sentence and lose the thread halfway through, and everyone will notice. But research on the &lt;a href=&quot;https://en.wikipedia.org/wiki/Peak%E2%80%93end_rule&quot;&gt;peak-end rule&lt;/a&gt; suggests otherwise: people remember how an experience ends far more than how it begins. That awkward pause where you gathered your thoughts? Nobody remembers it. They remember where you landed.&lt;/p&gt;
&lt;p&gt;And here&apos;s the thing that took me longest to learn: the words matter less than you think. What matters is whether you&apos;re genuinely present in the conversation or holding back out of self-protection. People can tell the difference. A half-formed thought offered with genuine curiosity lands better than a polished point delivered defensively. The barrier to speaking up is almost always internal, and the cost of staying silent is almost always higher than you think.&lt;/p&gt;
&lt;h2&gt;The introvert question&lt;/h2&gt;
&lt;p&gt;Everything I&apos;ve said so far assumes a fairly specific kind of person: someone who has thoughts but holds them back. But not everyone processes information the same way. Some people think by talking. Others think by reflecting. Telling introverts to &quot;just speak up more&quot; can feel like telling left-handed people to write with their right hand.&lt;/p&gt;
&lt;p&gt;Donna Lichaw, in her book &lt;a href=&quot;https://www.donnalichaw.com/the-leaders-journey&quot;&gt;&lt;em&gt;The Leader&apos;s Journey&lt;/em&gt;&lt;/a&gt;, tells the story of a leader who &quot;was so quiet that her team thought she was not interested in them... it really was detrimental to them all working together.&quot; But the solution wasn&apos;t to force her into a different personality. It was communication about process: &quot;She just started communicating with them more about, &apos;Hey, this is my style. I&apos;m a little slower. I often need a couple of hours to really process things. I&apos;m here, and I want you to know that.&apos;&quot;&lt;/p&gt;
&lt;p&gt;This feels like a crucial middle path. The goal isn&apos;t to turn everyone into the loudest person in the room. It&apos;s to make your thinking visible, whatever that looks like for you. For some people, that&apos;s speaking up in the moment. For others, it&apos;s following up after with a considered email. For others still, it&apos;s writing.&lt;/p&gt;
&lt;h2&gt;From speaking up to writing it down&lt;/h2&gt;
&lt;p&gt;Deb Liu&apos;s manager gave her a piece of advice that changed her career trajectory: &quot;Just write what you repeat. If you say something more than once, just write it down.&quot;&lt;/p&gt;
&lt;p&gt;This led to years of monthly internal posts, which eventually became external publishing, and ultimately &lt;a href=&quot;https://debliu.com/&quot;&gt;her book&lt;/a&gt;. The progression is worth examining: she didn&apos;t start by writing for the public. She started by capturing thoughts she was already having and putting them somewhere others could find them.&lt;/p&gt;
&lt;p&gt;Ed Elson, on a &lt;a href=&quot;https://www.youtube.com/watch?v=tLVgj2_jM70&quot;&gt;recent podcast with Kyla Scanlon&lt;/a&gt;, pushes this idea further:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;If you want to connect the dots and be smarter about what is happening in the world, you should just make yourself speak more. Like, you should say more things at work. When you&apos;re in a meeting, you should just force yourself to say something, have an opinion.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And then the leap to public:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;You should do it online. Like, you should post on LinkedIn. You should start a newsletter. Like, I feel like the more you hold yourself accountable to producing thoughts and ideas, which is really uncomfortable to do, but the more you force yourself to do it, I think it really helps you connect the dots and also develop your own perspective.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;What Ed is describing is rubber ducking at scale. When you write for an audience, even a small one, you&apos;re forced to examine your assumptions more rigorously than when the thoughts stay in your head. You have to structure your argument. You have to anticipate objections. You have to decide what you actually believe versus what you&apos;re merely parroting from someone else.&lt;/p&gt;
&lt;p&gt;The accountability mechanism matters too. Deb Liu had a monthly cadence. Ed talks about holding yourself accountable to &quot;producing thoughts and ideas.&quot; The regularity is what transforms occasional insight into a consistent practice of thinking clearly.&lt;/p&gt;
&lt;h2&gt;The critical caveat: informed takes, not hot takes&lt;/h2&gt;
&lt;p&gt;Now, I know what you might be thinking. Doesn&apos;t the internet already have enough people sharing their opinions? Do we really need more LinkedIn posts?&lt;/p&gt;
&lt;p&gt;Kyla Scanlon provides the essential correction:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I don&apos;t know if we need like more hot takes. I think we need more informed takes.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the distinction that separates valuable public thinking from noise. Hot takes can spark lively debate and provide fresh perspectives, but they also run the risk of oversimplifying complex issues or sensationalising topics for clicks.&lt;/p&gt;
&lt;p&gt;And to be clear, none of this is an argument for speaking up purely for the sake of being heard. If you&apos;re in a meeting about a topic you know nothing about, interrupting the conversation just to register your presence helps nobody. That&apos;s not articulation, it&apos;s performance. The goal is to share thinking that you&apos;ve actually done, not to fill silence with sound. There&apos;s a meaningful difference between &quot;I haven&apos;t spoken yet so I should say something&quot; and &quot;I have a perspective here that might be useful.&quot;&lt;/p&gt;
&lt;p&gt;But here&apos;s where the rubber duck metaphor becomes particularly powerful. The whole point of rubber ducking is that it forces you to slow down and examine your assumptions before you present your conclusion. The discipline of articulating your thinking carefully, whether to a duck, a colleague, or a LinkedIn audience, is itself the preparation that transforms a hot take into an informed one.&lt;/p&gt;
&lt;p&gt;The answer isn&apos;t &quot;share less.&quot; It&apos;s &quot;think more before you share.&quot; And the best way to think more is, paradoxically, to commit to sharing regularly, because that commitment forces the preparation.&lt;/p&gt;
&lt;h2&gt;A progression, not a leap&lt;/h2&gt;
&lt;p&gt;Looking back at all of this, I think the real insight is that speaking up isn&apos;t a single skill. It&apos;s a progression:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Talk to the duck.&lt;/strong&gt; When you&apos;re stuck on a problem, explain it out loud. To a colleague, to a rubber duck, to your dog. The act of articulating the problem will often reveal the solution.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Speak in the room.&lt;/strong&gt; In meetings, force yourself to contribute earlier. Ask a question. Share an observation. Own it as your perspective, not a universal truth. The thought you&apos;re holding back might be the one that short-cuts the entire discussion.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Write what you repeat.&lt;/strong&gt; If you find yourself making the same point in multiple conversations, write it down. Share it internally. You&apos;re already doing the thinking; capturing it creates leverage.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Put it out there.&lt;/strong&gt; When you&apos;ve built confidence in your perspective through the earlier stages, share it publicly. Start a newsletter. Post on LinkedIn. The accountability of a public audience will sharpen your thinking further.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each stage builds on the last. Each one involves a slightly larger audience and a slightly higher bar for clarity. And each one delivers the same core benefit: forcing yourself to articulate your thinking makes the thinking better.&lt;/p&gt;
&lt;h2&gt;Back to the feedback&lt;/h2&gt;
&lt;p&gt;I think about that career feedback sometimes. Not because it took me ages to absorb, it didn&apos;t. But because understanding something intellectually and actually changing your behaviour are two very different things. Knowing you should speak up earlier in meetings is easy. Doing it, consistently, when every instinct is telling you to wait, to refine, to hold back until you&apos;re certain, that&apos;s the work.&lt;/p&gt;
&lt;p&gt;Rubber duck debugging works because it forces you to be explicit about what you think is true. Speaking up in meetings works for the same reason. Writing works for the same reason. Publishing works for the same reason.&lt;/p&gt;
&lt;p&gt;The medium changes. The mechanism doesn&apos;t.&lt;/p&gt;
&lt;p&gt;So speak up. Write it down. Put it out there. Not because the world needs more noise, but because your thinking deserves to be examined, and the only way to examine it properly is to say it out loud.&lt;/p&gt;
</content:encoded><category>Communication</category><category>Product Management</category><category>Writing</category><category>Career Development</category><category>Leadership</category><author>Steve James</author></item><item><title>The Product Sense Myth: Why Your Best PMs Aren&apos;t Born, They&apos;re Built</title><link>https://stvpj.com/blog/product-sense/</link><guid isPermaLink="true">https://stvpj.com/blog/product-sense/</guid><description>Product sense isn&apos;t a mystical gift some people are born with. It&apos;s a learnable skill built through deliberate practice, user exposure, and reflection. Here&apos;s how to develop it.</description><pubDate>Thu, 05 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We&apos;ve all heard it. Someone in a hiring debrief leans back and says, &quot;They just have great product sense.&quot;&lt;/p&gt;
&lt;p&gt;It sounds meaningful. Decisive, even. But what does it actually mean?&lt;/p&gt;
&lt;p&gt;Usually, it&apos;s a way of saying &quot;I can&apos;t quite explain why I like this person, but I do.&quot; And that&apos;s a problem. Because when we treat product sense as some kind of innate gift, we stop asking whether it can be taught. Spoiler: it can.&lt;/p&gt;
&lt;h2&gt;Why &quot;Product Sense&quot; Is a Confusing Term&lt;/h2&gt;
&lt;p&gt;Here&apos;s the thing. Product sense does describe something real. When a seasoned PM looks at a prototype and immediately spots what&apos;s wrong, that&apos;s not luck. Their brain is doing something useful, pulling together years of experience into a quick judgement.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://cwodtke.com/building-product-sense-why-your-gut-needs-an-education/&quot;&gt;Christina Wodtke explains this well&lt;/a&gt;. She draws on Davenport and Prusak&apos;s concept that intuition is &quot;compressed experience&quot;, and points out that when experienced PMs react quickly, &quot;they&apos;re not being mystical. Their brain is processing hundreds of micro-signals: user flow friction, business model misalignment, technical complexity, competitive dynamics. Years of experience get compressed into a split-second gut reaction.&quot;&lt;/p&gt;
&lt;p&gt;That&apos;s genuinely valuable. But by calling it &quot;sense&quot;, we make it sound like something you&apos;re born with. And that&apos;s where we go wrong.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/product-sense&quot;&gt;Jules Walter&lt;/a&gt;, who&apos;s spent over a decade building products at YouTube, Slack, and Google, is direct about this: &quot;Contrary to what a lot of PMs believe, product sense is not something you need to be born with. It&apos;s a learned skill, just like any other PM skill.&quot;&lt;/p&gt;
&lt;p&gt;So why do we keep treating it like magic?&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Baseball Can Teach Us About Product Hiring&lt;/h2&gt;
&lt;p&gt;Bear with me here, because this analogy is worth it.&lt;/p&gt;
&lt;p&gt;Before &lt;em&gt;Moneyball&lt;/em&gt; came along, baseball scouts picked players based on gut feel. They watched how someone moved, how they looked, whether they had that indefinable &quot;it factor&quot;. They were experienced. They were confident. And they were often wrong.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://yahiahassan.substack.com/p/is-product-sense-overrated&quot;&gt;Yahia Hassan makes this connection&lt;/a&gt; in his critique of how we think about product sense. He asks: &quot;Could a baseball team win by constructing a roster based solely on their scouts&apos; &apos;sharp eye for talent&apos;? What would that look like? Selecting players based on their physical appearance? Or perceived skill?&quot; His point is that product management is making the same mistake baseball made before the data revolution.&lt;/p&gt;
&lt;p&gt;Now, I&apos;m not saying intuition is useless. Far from it. But when we hire PMs based on whether they &quot;seem&quot; to have good product sense, we&apos;re doing something similar to those old baseball scouts. We&apos;re mistaking confidence for competence and experience for ability.&lt;/p&gt;
&lt;p&gt;Hassan&apos;s takeaway is pointed: &quot;The key takeaway here is, like a baseball GM, product managers need to use their product sense as a complement to being data-informed. This is why I think the ability to understand what metrics are important and measure them is what separates great PMs from the pack. More so than product sense.&quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;So What Actually Is Product Sense?&lt;/h2&gt;
&lt;p&gt;If we&apos;re going to develop something, we need to define it properly. And vague definitions don&apos;t help anyone.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/product-sense&quot;&gt;Jules Walter offers the clearest one&lt;/a&gt; I&apos;ve come across:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Product sense is the skill of consistently being able to craft products (or make changes to existing products) that have the intended impact on their users.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Notice what&apos;s useful here. It&apos;s measurable. You can actually track whether your product bets are paying off more often than they used to. If you started out being right two times in ten and now you&apos;re right eight times in ten, your product sense has improved. Simple as that.&lt;/p&gt;
&lt;p&gt;Walter breaks it down into two parts:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;(1) empathy to discover meaningful user needs and&lt;/p&gt;
&lt;p&gt;(2) creativity to come up with solutions that effectively address those needs.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Neither of those is mystical. Both can be worked on.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://austinyang.co/what-product-sense-is-and-isnt/&quot;&gt;Austin Yang takes a slightly different angle&lt;/a&gt;. He finds the usual definition, &quot;the ability to understand what makes a product great&quot;, to be &quot;a bit self-repeating and ambiguous.&quot; Instead, he argues that &quot;product sense should be equated to a person&apos;s ability to do certain things when limited information is given. The ability to outline all potential paths and obstacles will take a product closer to its destination, even in extreme ambiguity.&quot;&lt;/p&gt;
&lt;p&gt;I like this framing because it&apos;s practical. It&apos;s not about having the right answer. It&apos;s about navigating uncertainty well. And that&apos;s something you can get better at.&lt;/p&gt;
&lt;p&gt;As Yang puts it: &quot;Product sense is a skill you refine through practice, not a talent you are born with.&quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;How Meta Thinks About It&lt;/h2&gt;
&lt;p&gt;Say what you will about Meta, but they&apos;ve thought hard about what makes a good PM.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/vision-conviction-hype-mihika-kapoor&quot;&gt;Mihika Kapoor&lt;/a&gt;, who led products at Meta before joining Figma, explains their approach: &quot;Meta basically distilled the product role into two core capabilities: product sense and execution.&quot;&lt;/p&gt;
&lt;p&gt;What&apos;s interesting is how she defines product sense in practice. It&apos;s not some innate gift. It&apos;s &quot;just like having good intuition. And so there&apos;s kind of this question about like, okay, how do you build up intuition? And I think that it&apos;s just by like, having this insatiable curiosity and talking to users at every chance you get.&quot;&lt;/p&gt;
&lt;p&gt;That&apos;s the key insight. Intuition isn&apos;t magic. It&apos;s pattern recognition built through exposure. And exposure is something you can deliberately seek out.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/slack-founder-stewart-butterfield&quot;&gt;Stewart Butterfield&lt;/a&gt;, the co-founder of Slack, uses a cooking analogy that I find helpful. He compares developing product taste to becoming a chef. No one&apos;s born knowing how to balance flavours or time multiple dishes. You learn by doing it over and over again.&lt;/p&gt;
&lt;p&gt;He also makes a sharp observation: &quot;Most people don&apos;t have good taste and don&apos;t invest&quot; in developing it. Which means if you do invest, you&apos;ve got an edge.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;How to Actually Get Better at This&lt;/h2&gt;
&lt;p&gt;Alright, so product sense is learnable. How do you learn it?&lt;/p&gt;
&lt;p&gt;Wodtke argues that most people go about it wrong. They read articles and books about product thinking, which she compares to trying to learn tennis by studying physics. You need reps.&lt;/p&gt;
&lt;p&gt;Here&apos;s what actually works:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use products deliberately.&lt;/strong&gt; Every app on your phone is a case study. Someone made decisions about every button, every flow, every piece of copy. Start asking why. What problem is this solving? Why did they make this trade-off instead of another one?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Spend time with users.&lt;/strong&gt; Not reading summaries. Not watching highlight reels. Actually sitting with people, watching them use your product, listening to how they describe their problems. Kapoor&apos;s advice about &quot;talking to users at every chance you get&quot; is spot on.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pay attention to trends.&lt;/strong&gt; &lt;a href=&quot;https://www.lennysnewsletter.com/p/product-sense&quot;&gt;Walter recommends&lt;/a&gt; tracking both big shifts, like new platforms or regulations, and smaller patterns in your specific area. The PMs who seem to predict the future usually aren&apos;t psychic. They&apos;ve just been paying attention longer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reflect on your bets.&lt;/strong&gt; When you make a product decision, write down what you expect to happen. Then check back later. This turns vague intuition into something testable.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2&gt;Data vs. Intuition Is a False Choice&lt;/h2&gt;
&lt;p&gt;One of the most common misunderstandings is that you have to pick a side. Either you&apos;re a &quot;data-driven&quot; PM or you trust your gut. But that&apos;s a false choice.&lt;/p&gt;
&lt;p&gt;Marcy Farrell, writing for &lt;a href=&quot;https://www.harvardbusiness.org/data-and-intuition-good-decisions-need-both/&quot;&gt;Harvard Business Publishing&lt;/a&gt;, puts it well:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Intuition is often seen as the opposite of reason, and when cast in this binary way, intuition is often defined as having no place in the age of science and data.&quot; But that framing misses the point. &quot;... many decisions are too complex to rely on metrics or gut feelings alone. The best leaders and decision makers use both data and intuition to their advantage.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Laura Huang&apos;s research at Harvard Business School shows that gut feelings work best &quot;in highly uncertain circumstances where further data gathering won&apos;t sway the decision maker one way or the other.&quot;&lt;/p&gt;
&lt;p&gt;But there&apos;s a catch. The same piece warns that &quot;at its best, intuition is a powerful form of pattern recognition, something human brains are wired to do. But when not managed well, pattern recognition and trusting one&apos;s gut may lead to bias and incomplete or overly simplistic thinking.&quot;&lt;/p&gt;
&lt;p&gt;The best PMs use data to sharpen their intuition, and intuition to know what data to look for.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What This Means for Hiring&lt;/h2&gt;
&lt;p&gt;If product sense is learnable, we need to rethink how we hire.&lt;/p&gt;
&lt;p&gt;Right now, most companies treat product sense interviews as a test of natural ability. Give someone an ambiguous prompt and see if they &quot;get it.&quot;&lt;/p&gt;
&lt;p&gt;But as &lt;a href=&quot;https://austinyang.co/what-product-sense-is-and-isnt/&quot;&gt;Yang points out&lt;/a&gt;, junior PMs often haven&apos;t had enough experience to develop meaningful intuition yet. Evaluating them on product sense isn&apos;t identifying talent. It&apos;s identifying people who&apos;ve been lucky enough to get relevant experience somewhere else.&lt;/p&gt;
&lt;p&gt;This creates a cycle that rewards the usual backgrounds and filters out everyone else. Then we congratulate ourselves on finding &quot;natural talent&quot;, when really we&apos;ve just found people who started with more advantages.&lt;/p&gt;
&lt;p&gt;A better approach? Hire for curiosity and learning ability. Invest in the experiences that build product judgement over time. Stop treating product sense as something people either have or don&apos;t.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Bottom Line&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.linkedin.com/posts/scottbelsky_products-that-are-built-with-empathy-can-activity-7062058569558315008-kBEF&quot;&gt;Scott Belsky&lt;/a&gt;, Adobe&apos;s Chief Product Officer, captures what good product thinking actually looks like:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Empathy &amp;gt; passion when building a new product. Empathy for those suffering from a problem outperforms passion for a potential solution.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When products fail despite hard work, he says, &quot;usually, it&apos;s due to a lack of empathy for the customer.&quot; The answer is to go &quot;shoulder to shoulder with them to identify this problem first before crafting your vision. You must talk to customers, watch them go about their day, and ask why they&apos;re struggling.&quot;&lt;/p&gt;
&lt;p&gt;That&apos;s not magic. That&apos;s craft. And craft can be learned.&lt;/p&gt;
&lt;p&gt;Product sense isn&apos;t a gift some people are born with. It&apos;s the result of caring about users, spending time with them, thinking hard about problems, and building feedback loops that make your judgement sharper over time.&lt;/p&gt;
&lt;p&gt;Anyone can do that. The question is whether we&apos;re willing to put in the work, and whether our industry is ready to stop pretending otherwise.&lt;/p&gt;
</content:encoded><category>Product Sense</category><category>Product Management</category><category>Hiring</category><category>Intuition</category><category>Decision Making</category><category>PM Skills</category><author>Steve James</author></item><item><title>Adventures with Clawdbot: From Autonomy to Economic Reality</title><link>https://stvpj.com/blog/adventures-with-ai-assistant/</link><guid isPermaLink="true">https://stvpj.com/blog/adventures-with-ai-assistant/</guid><description>A deep dive into building an AI assistant, the move from rigid automation to fluid agency, and the hard lessons learned about token economics and security.</description><pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The New AI Hotness&lt;/h2&gt;
&lt;p&gt;Unless you&apos;ve been living under a rock in the tech world lately, you&apos;ve likely heard about the latest AI obsession: &lt;strong&gt;&quot;Clawdbot&quot;&lt;/strong&gt;, or &lt;strong&gt;&quot;Moltbot&quot;&lt;/strong&gt; as it&apos;s now called to avoid Anthropic&apos;s wrath. It is an open-source framework designed to give an AI model a persistent memory and personality, promising a future where your AI isn&apos;t just a tab in your browser, but a permanent, thinking resident on your hardware.&lt;/p&gt;
&lt;p&gt;For the last few months, I’ve been exploring options that go far beyond simple server scripts. I wanted a 24/7 assistant that could &lt;strong&gt;10X my ability to do stuff&lt;/strong&gt;. A proactive partner that didn&apos;t just wait for a prompt, but actually &lt;em&gt;thought&lt;/em&gt; and acted on my behalf. I wanted an agent with a &quot;Brain.&quot;&lt;/p&gt;
&lt;h2&gt;Beyond the Sentinel: The Agentic Pivot&lt;/h2&gt;
&lt;p&gt;If you&apos;ve visited my website, you may have read about how I spent considerable time building the &lt;strong&gt;&lt;a href=&quot;https://stvpj.com/work/homelab-sentinel&quot;&gt;Autonomous Homelab Sentinel&lt;/a&gt;&lt;/strong&gt;. That project uses &lt;strong&gt;n8n&lt;/strong&gt; for central orchestration, a web of hard-coded logic that, while effective, felt rigid. It was automation, but it wasn&apos;t &lt;em&gt;agency&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The arrival of Moltbot offered a chance to move &quot;up-stack.&quot; I repurposed an old 2012 Mac Mini, installing Ubuntu Server on it to host &quot;&lt;strong&gt;Jennifer&lt;/strong&gt;&quot;, my resident agent. Unlike the node-based logic of n8n, Jennifer used a &lt;code&gt;SOUL.md&lt;/code&gt; file for personality and a &lt;code&gt;MEMORY.md&lt;/code&gt; file to maintain persistent context across every interaction, turning my new mini server into the engine room for a full-blown personal OS.&lt;/p&gt;
&lt;h2&gt;The &quot;Astonishingly Simple&quot; 10X Assistant&lt;/h2&gt;
&lt;p&gt;The most striking thing about Jennifer wasn&apos;t just what she could do, but how &lt;strong&gt;astonishingly simple&lt;/strong&gt; it was to achieve. With traditional automation, you spend hours debugging nodes and logic gates. Even using Claude Code with n8n MCP integration and skills, it was still extremely time-consuming. With Jennifer, even very technical tasks simply got done, without any bother.&lt;/p&gt;
&lt;p&gt;Once Moltbot was set up (a process that included a simple Telegram integration), it didn&apos;t feel like working with a &quot;chatbot&quot; anymore. It felt like having a highly intelligent technical assistant on the payroll. I would simply tell her my goal, whether in the terminal, the web UI, or via Telegram, and the work happened behind the scenes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Life Logistics&lt;/strong&gt;: She integrated with and managed my to-do list, moving fluidly between conversation and structured data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Email Synthesis&lt;/strong&gt;: I gave her read-only access to my email inbox with instructions to identify anything that she felt she could help with. I soon noticed to-do items showing up in my list based on emails I’d received, without me lifting a finger.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Network Sentinel&lt;/strong&gt;: She monitored my entire network via read-only APIs and alerted me proactively about any issues.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Proactive Engineering&lt;/strong&gt;: Without me needing to spec out the &quot;how,&quot; Jennifer built and deployed three functional apps on a local web server (accessible anywhere via Tailscale): an &lt;strong&gt;Interactive Homelab Dashboard&lt;/strong&gt;, a &lt;strong&gt;Blogging Ideas Management&lt;/strong&gt; tool, and a &lt;strong&gt;Project Management Kanban app&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I didn&apos;t have to write requirements. I didn&apos;t have to put together a PRD. I discussed what I wanted to do and Jennifer just went and did it. Each of these mini &quot;Apps&quot; is well-designed and just works. All of this, from first installing Moltbot to having everything integrated, took a few hours.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./kanban.png&quot; alt=&quot;A visual representation of Jennifer&apos;s autonomous workflow and local app deployments&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Security Shadow: Autonomy vs. Vulnerability&lt;/h2&gt;
&lt;p&gt;However, this level of power is a double-edged sword. To make Jennifer a &quot;10X&quot; assistant, I needed to give her deep access to my local machine.&lt;/p&gt;
&lt;p&gt;There are &quot;gaping holes&quot; in these early agentic frameworks that the community is still grappling with. By giving an AI shell access, you are creating a wide-open gateway into your network. If the framework is compromised, or the agent is &quot;jailbroken&quot; via prompt injection, you aren&apos;t just losing data, you have a persistent, autonomous actor working against you from inside your firewall.&lt;/p&gt;
&lt;h2&gt;The Economic Wall: Subscriptions vs. Reality&lt;/h2&gt;
&lt;p&gt;Even if you stomach the security risks, you hit the &lt;strong&gt;Agency Tax.&lt;/strong&gt; Here is the cold reality: Using an agent like Moltbot 24/7 essentially violates the &quot;personal use&quot; spirit of Anthropic’s Pro or Max subscription plans. Those plans are built for human-speed chatting, not high-frequency machine orchestration. To stay above board, you have to move to a &lt;strong&gt;Pay-As-You-Go API billing model.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is when the reality hits, hard. When you can literally see your token costs moving in real-time as your assistant endlessly polls your environment to &quot;stay on top of everything,&quot; the dream becomes a budget item. To be a &quot;Jarvis,&quot; Jennifer has to stay &quot;alive,&quot; and that life is measured in millions of tokens.&lt;/p&gt;
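&lt;p&gt;To make &quot;millions of tokens&quot; concrete, here is a back-of-envelope sketch. The polling interval, tokens per check, and per-token price below are purely illustrative assumptions of mine, not Moltbot&apos;s actual behaviour or Anthropic&apos;s real pricing:&lt;/p&gt;

```python
# Back-of-envelope monthly cost of an always-on agent.
# Every constant here is an illustrative assumption, not real pricing.

POLL_INTERVAL_MIN = 5      # assumed: one background check every 5 minutes
TOKENS_PER_CHECK = 3_000   # assumed: context + tool output per check
PRICE_PER_MTOK = 5.00      # assumed: dollars per million input tokens

checks_per_day = 24 * 60 // POLL_INTERVAL_MIN          # 288 checks/day
tokens_per_month = checks_per_day * TOKENS_PER_CHECK * 30
cost_per_month = tokens_per_month / 1_000_000 * PRICE_PER_MTOK

print(f"{tokens_per_month:,} tokens/month -> ${cost_per_month:.2f}/month")
```

&lt;p&gt;Even with these modest assumptions, the idle loop alone burns roughly 26 million tokens a month before the agent does any actual work, and the bill scales linearly with model price.&lt;/p&gt;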
&lt;p&gt;&lt;strong&gt;You want to give your assistant the best brain possible&lt;/strong&gt;, but using top-tier models (like Opus 4.5) for every background check is economically ruinous. I asked Jennifer to save money by dynamically up- and down-grading her model use, something Claude Code handles natively, and she enthusiastically agreed before promptly losing connection to all models entirely. It took about 20 minutes to get her back up and running again.&lt;/p&gt;
&lt;h2&gt;Moving Up-Stack: The Current State&lt;/h2&gt;
&lt;p&gt;I’ve had to concede: until costs drop by an order of magnitude and security frameworks catch up, the 24/7 autonomous agent is a luxury I can’t justify. If an agent can&apos;t afford to &quot;listen&quot; proactively, it stops being an agent and goes back to being a high-latency chatbot.&lt;/p&gt;
&lt;p&gt;I’ve shifted back from &lt;strong&gt;Agency&lt;/strong&gt; to &lt;strong&gt;Tooling&lt;/strong&gt;. I now use &lt;strong&gt;Claude Code&lt;/strong&gt; on the Ubuntu Server, a human-triggered, high-reasoning engine. My monitors are back to being efficient, silent, and &lt;em&gt;free&lt;/em&gt; cron jobs. It lacks Jennifer’s &quot;human-speak,&quot; but it is &lt;strong&gt;Strategically Realistic&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The Jennifer experiment was a preview of the next age of product management. The tech is ready; the economics and security are simply waiting to catch up. I cannot wait until we get there.&lt;/p&gt;
</content:encoded><category>AI Strategy</category><category>Moltbot</category><category>Product Management</category><category>Automation</category><category>Security</category><author>Steve James</author></item><item><title>The Case for Strategic Realism: Why You Shouldn&apos;t Abandon the Ideal (Even When It Hurts)</title><link>https://stvpj.com/blog/strategic-realism-product-management/</link><guid isPermaLink="true">https://stvpj.com/blog/strategic-realism-product-management/</guid><description>It is easy to dismiss modern product theory as &apos;out of touch&apos; when you are drowning in bureaucracy. But giving up on the ideal isn&apos;t the answer, strategic realism is. Here is how to play the long game without losing your mind.</description><pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The &quot;Reality Gap&quot; in Product Management&lt;/h2&gt;
&lt;p&gt;There is a recurring thread I see on Reddit and in private PM Slack communities that goes something like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&quot;Marty Cagan, Teresa Torres, and Mik Kersten have great ideas, but they don&apos;t live in the real world. They don&apos;t know what it&apos;s like to work in &apos;my company&apos;, where Legal needs six weeks to approve a survey and the CEO prioritises features based on who he played golf with last weekend.&quot;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I understand this sentiment. Viscerally.&lt;/p&gt;
&lt;p&gt;When you are fighting for inches of progress in an organisation that seems determined to move backwards, reading &lt;em&gt;Empowered&lt;/em&gt; or &lt;em&gt;Project to Product&lt;/em&gt; can feel less like inspiration and more like gaslighting. It creates a painful dissonance between the job you &lt;em&gt;thought&lt;/em&gt; you had and the job you &lt;em&gt;actually&lt;/em&gt; do.&lt;/p&gt;
&lt;p&gt;But I want to challenge the conclusion that many Product Managers draw from this. The conclusion usually is: &lt;em&gt;&quot;These theories are idealistic nonsense, so I should just stop trying and accept that Product Management is just taking orders.&quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That is a mistake.&lt;/p&gt;
&lt;p&gt;The authors aren&apos;t wrong. The models aren&apos;t broken. The problem is that we are underestimating the sheer psychological weight of &lt;strong&gt;organisational muscle memory&lt;/strong&gt;. And more importantly, we are trying to sprint a marathon, burning ourselves out in the process.&lt;/p&gt;
&lt;p&gt;We need a new approach. Not a new framework, but a philosophy of &lt;strong&gt;Strategic Realism&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Science of Why It Hurts&lt;/h2&gt;
&lt;p&gt;It is important to understand that the frustration you feel isn&apos;t just professional annoyance; it is a psychological response to structural resistance.&lt;/p&gt;
&lt;p&gt;Research into &lt;strong&gt;structural inertia theory&lt;/strong&gt; suggests that organisations are designed to resist change. It is a feature, not a bug. Older organisations have high inertia because their structures and routines are solidified to ensure survival and reliability. When you try to introduce &quot;continuous discovery&quot; or &quot;flow metrics&quot; into this environment, you aren&apos;t just changing a process; you are fighting the organisation&apos;s survival instinct.&lt;/p&gt;
&lt;p&gt;As I explored in my previous post on &lt;strong&gt;&lt;a href=&quot;https://stvpj.com/blog/resistance-change-kuhn-neuroscience&quot;&gt;The Neuroscience of Resistance to Change&lt;/a&gt;&lt;/strong&gt;, this resistance is deeply rooted in how human brains, and by extension, human organisations, process uncertainty. The &quot;muscle memory&quot; of the organisation will always pull it back to the status quo until a critical threshold of dissonance is reached.&lt;/p&gt;
&lt;p&gt;This fight has a quantifiable cost:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.capterra.com/resources/middle-manager-burnout-strategies/&quot;&gt;71% of middle managers report feeling burned out&lt;/a&gt;&lt;/strong&gt;, the highest of any employee group.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://finance.yahoo.com/news/dhr-global-report-reveals-drop-150000223.html&quot;&gt;&quot;Change fatigue&quot; is a primary driver of disengagement&lt;/a&gt;&lt;/strong&gt;, with nearly half of the global workforce reporting exhaustion severe enough to impact performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.mckinsey.com/capabilities/transformation/our-insights/common-pitfalls-in-transformations-a-conversation-with-jon-garcia&quot;&gt;The failure rate for organisational change initiatives remains stubbornly high&lt;/a&gt;&lt;/strong&gt;, estimated around &lt;strong&gt;70%&lt;/strong&gt; for decades.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So, if you feel like you are banging your head against a wall, it’s because you are. But that doesn&apos;t mean the wall is immovable. It just means you can&apos;t knock it down with your forehead.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Breaking the Binary: A New Philosophy&lt;/h2&gt;
&lt;p&gt;If Option A is &quot;Naïve Idealism&quot; (trying to force perfection immediately and burning out) and Option B is &quot;Cynical Defeatism&quot; (giving up and becoming a feature factory), we are left stuck in the middle without a map.&lt;/p&gt;
&lt;p&gt;I suspect many of you are reading this and nodding along, perhaps a bit wearily. The dissonance between &quot;what good looks like&quot; and &quot;what we have to do today&quot; is the defining tension of our role right now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;That is why I&apos;m coining the term: Strategic Realism.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I am proposing this terminology because we need a way to describe this coping mechanism that doesn&apos;t sound like failure. &quot;Compromising&quot; sounds weak. &quot;Giving up&quot; sounds defeated. But &lt;strong&gt;Strategic Realism&lt;/strong&gt; describes exactly what we are doing: making calculated, survival-based decisions to preserve our agency so we can fight another day.&lt;/p&gt;
&lt;p&gt;It is a badge of honour, not a mark of shame.&lt;/p&gt;
&lt;p&gt;Here is how to apply it:&lt;/p&gt;
&lt;h3&gt;1. Reframe Resistance as Muscle Memory&lt;/h3&gt;
&lt;p&gt;When a stakeholder demands a roadmap with fixed dates, they aren&apos;t necessarily being &quot;anti-Agile.&quot; They are exhibiting muscle memory. They have spent 20 years believing in a system where dates equal certainty.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The Idealist&lt;/strong&gt; fights the date and loses trust.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The Strategic Realist&lt;/strong&gt; provides the date but wraps it in caveats, then works quietly to shorten the feedback loop so the date becomes irrelevant sooner.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;2. The &quot;Pocket of Air&quot; Strategy&lt;/h3&gt;
&lt;p&gt;You cannot oxygenate the whole ocean, but you can create a bubble. Find one small area where you have autonomy (a specific feature, a single squad, a minor internal tool) and run it &quot;the right way.&quot;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do the discovery.&lt;/li&gt;
&lt;li&gt;Measure the outcomes.&lt;/li&gt;
&lt;li&gt;Protect the team.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Creating this &quot;pocket of air&quot; proves that the model works without threatening the entire organisational immune system.&lt;/p&gt;
&lt;h3&gt;3. Accept &quot;Good Enough&quot; for Now&lt;/h3&gt;
&lt;p&gt;This is the hardest part for high achievers. Sometimes, a B-minus outcome is a victory if it keeps you in the game.
If you manage to get &lt;em&gt;one&lt;/em&gt; customer interview done this month, that is infinitely better than zero. If you manage to shift the roadmap from &quot;features&quot; to &quot;problems&quot; for just &lt;em&gt;one&lt;/em&gt; quarter, take the win.
Don&apos;t measure your success against a case study in a book. Measure it against where your organisation was six months ago.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why You Must Not Give Up&lt;/h2&gt;
&lt;p&gt;The reason I push back against the Reddit cynicism is that &lt;strong&gt;the turning point is coming&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;As I wrote previously, we are in a transition period. The old models of &quot;project management&quot; are objectively failing to deliver value in the Age of Software. The companies that cling to them will eventually face an existential crisis.&lt;/p&gt;
&lt;p&gt;When that crisis hits, when the old muscle memory stops working, the organisation will panic. They will look for answers.&lt;/p&gt;
&lt;p&gt;If you have given up and become a cynical ticket-mover, you won&apos;t be able to help.
But if you have practiced &lt;strong&gt;Strategic Realism&lt;/strong&gt;, if you have kept the flame of &quot;true&quot; Product Management alive in your pocket of air, waiting for the right moment, you will be the one they turn to.&lt;/p&gt;
&lt;p&gt;Marty Cagan isn&apos;t wrong. He is just describing the destination. The terrain between here and there is swampy, foggy, and full of obstacles.&lt;/p&gt;
&lt;p&gt;Your job isn&apos;t to pretend the swamp doesn&apos;t exist. Your job is to navigate it without drowning.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Take the pragmatic approach for your sanity, but keep your eyes on the ideal. The industry needs you to survive the journey.&lt;/strong&gt;&lt;/p&gt;
</content:encoded><category>Agile</category><category>Product Management</category><category>Career Growth</category><category>Mental Health</category><category>Organisational Change</category><author>Steve James</author></item><item><title>The Innovation Paradox: Why You Can&apos;t Measure a Paradigm Shift</title><link>https://stvpj.com/blog/innovation-paradox-conviction/</link><guid isPermaLink="true">https://stvpj.com/blog/innovation-paradox-conviction/</guid><description>We are obsessed with validation. But while data is excellent at optimising the present, it is often terrible at predicting the future. True innovation requires swapping the safety of metrics for the risk of conviction.</description><pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the modern Product Management universe, &quot;Data-Driven&quot; has become a religious mantra. We don&apos;t just use data; we rely on it to absolve us of responsibility. We want A/B tests, validation metrics, and market signals to tell us exactly what to build next.&lt;/p&gt;
&lt;p&gt;It feels safe. It feels scientific. But there is a dangerous ceiling to this approach. A friend recently mentioned a thought that really stuck with me:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&quot;No great achievement would have ever survived a cost-benefit analysis.&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This isn&apos;t just about money; it’s about &lt;strong&gt;certainty&lt;/strong&gt;. If you wait for the data to prove that a paradigm shift is definitely going to be a &quot;good idea&quot;, you have already missed the boat. Data is a record of the past, and you cannot measure the performance of a product that creates a behaviour that doesn&apos;t exist yet.&lt;/p&gt;
&lt;h3&gt;The Trap of the Local Maximum&lt;/h3&gt;
&lt;p&gt;In product theory, we often talk about the &lt;a href=&quot;https://cxl.com/blog/local-maximum/&quot;&gt;&lt;strong&gt;Local Maximum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./local-max-graph.png&quot; alt=&quot;Graph illustrating the local maximum versus the global maximum&quot; /&gt;
&lt;em&gt;Figure 1: Graph illustrating the local maximum versus the global maximum.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Imagine you are climbing a hill in the fog. You use your data (your altimeter) to make sure every step you take is &quot;up&quot;. You optimise your path perfectly, eventually reaching the very top of the hill. The data says you can go no higher. You have reached the peak.&lt;/p&gt;
&lt;p&gt;But because of the fog (the unknown future), you can’t see that you are standing on a small foothill, and right next to you is a massive mountain that soars ten times higher.&lt;/p&gt;
&lt;p&gt;This is what a purely data-driven approach does. It is incredible at &lt;strong&gt;optimisation&lt;/strong&gt; (climbing the current hill). It is terrible at &lt;strong&gt;innovation&lt;/strong&gt; (going back down the valley to climb the next mountain).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data&lt;/strong&gt; could tell Blockbuster how to optimise late fees and store layouts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data&lt;/strong&gt; could not tell them to abandon their stores and start a streaming service.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To move from the foothill to the mountain requires a temporary drop in metrics. It requires a leap of faith across the gap.&lt;/p&gt;
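&lt;p&gt;For the algorithmically minded, the hill metaphor is literally greedy hill-climbing. Here is a minimal sketch; the landscape, starting point, and step size are my own illustrative inventions:&lt;/p&gt;

```python
# Greedy hill-climbing on a landscape with a foothill and a mountain.
# The altitude function and starting point are illustrative choices.

def altitude(x: float) -> float:
    foothill = max(0.0, 2 - (x - 1) ** 2)    # small peak at x=1, height 2
    mountain = max(0.0, 20 - (x - 6) ** 2)   # tall peak at x=6, height 20
    return foothill + mountain

def climb(x: float, step: float = 0.1) -> float:
    # The "data-driven" rule: only ever take a step that increases altitude.
    while True:
        best = max((x - step, x, x + step), key=altitude)
        if best == x:
            return x  # no neighbouring step is higher: a (possibly local) maximum
        x = best

peak = climb(0.5)  # start in the fog, on the foothill
print(f"stopped at x={peak:.1f}, altitude={altitude(peak):.1f}")
```

&lt;p&gt;Starting near the foothill, the climber halts at the small peak (altitude 2) even though the mountain (altitude 20) is a short walk away: every step towards it would first read as &quot;down&quot; on the altimeter.&lt;/p&gt;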
&lt;h3&gt;The Innovation Gap: Lessons from the Walkman&lt;/h3&gt;
&lt;p&gt;This is where the &quot;Product Manager as Scientist&quot; metaphor breaks down. In science, you observe existing phenomena. In innovation, you are trying to &lt;em&gt;create&lt;/em&gt; new phenomena.&lt;/p&gt;
&lt;p&gt;Consider the famous case of the &lt;strong&gt;Sony Walkman&lt;/strong&gt;. In the late 1970s, market research was explicitly clear: consumers did not want a cassette player that couldn&apos;t record. It was a &quot;broken&quot; product. The data was accurate regarding &lt;em&gt;current&lt;/em&gt; consumer expectations, but it was blind to &lt;em&gt;future&lt;/em&gt; potential.&lt;/p&gt;
&lt;p&gt;Akio Morita, Sony’s co-founder, famously ignored the research. He didn&apos;t have data; he had a &lt;strong&gt;conviction&lt;/strong&gt;. As noted in &lt;a href=&quot;https://innovationmanagement.se/2018/11/21/innovation-insights-from-the-founder-of-sony/&quot;&gt;innovation case studies&lt;/a&gt;, Morita believed that people loved music enough to want it with them everywhere, even if they couldn&apos;t articulate it yet. He made a unilateral decision to launch, stating, &lt;em&gt;&quot;The market research is in my head.&quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If he had waited for the data to validate his decision, the Walkman would never have existed.&lt;/p&gt;
&lt;h3&gt;Discovery is Not Taking Orders&lt;/h3&gt;
&lt;p&gt;Does this mean we should abandon research and just guess? Absolutely not.&lt;/p&gt;
&lt;p&gt;The mistake many organisations make is treating research as a way to ask customers for solutions. If you ask a user what they want, they will describe a slightly better version of what they already have (a faster horse).&lt;/p&gt;
&lt;p&gt;True &lt;strong&gt;Product Discovery&lt;/strong&gt; isn&apos;t about gathering requirements; it&apos;s about uncovering &lt;strong&gt;unknown opportunities&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Don&apos;t ask:&lt;/strong&gt; &quot;Would you buy a portable cassette player that doesn&apos;t record?&quot; (They will say no).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Do observe:&lt;/strong&gt; How often do people struggle to listen to music while travelling? How does music change their mood? Where are the gaps in their current experience?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We must utilise qualitative research, observation, and deep discovery to understand the &lt;strong&gt;Problem Space&lt;/strong&gt; intimately. But when it comes to the &lt;strong&gt;Solution Space&lt;/strong&gt;, we cannot expect the customer to lead us. That is our job.&lt;/p&gt;
&lt;h3&gt;Conviction vs. Validation&lt;/h3&gt;
&lt;p&gt;This brings us to the core conflict in modern product leadership: the battle between &lt;strong&gt;Validation&lt;/strong&gt; and &lt;strong&gt;Conviction&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Validation&lt;/strong&gt; is the act of asking the world to confirm you are right. It is external. It seeks safety in numbers. It is the tool of the optimiser.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conviction&lt;/strong&gt; is the act of believing you are right before the world agrees. It is internal. It accepts the risk of being wrong in exchange for the chance to be transformative.&lt;/p&gt;
&lt;p&gt;This is why true innovation is rare. It isn&apos;t a resource problem; it&apos;s a &lt;strong&gt;courage&lt;/strong&gt; problem. When you lead with conviction, you strip away the protection of the spreadsheet. You are saying, &quot;The data doesn&apos;t see this yet, but I do.&quot; You are putting your reputation on the line.&lt;/p&gt;
&lt;h3&gt;Leading Through the Fog&lt;/h3&gt;
&lt;p&gt;We shouldn&apos;t discard data. We just need to understand its role.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data&lt;/strong&gt; is for &lt;strong&gt;Navigation&lt;/strong&gt;: It tells you where you are and how to optimise your current path.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Conviction&lt;/strong&gt; is for &lt;strong&gt;Destination&lt;/strong&gt;: It tells you where you need to go, even when the path isn&apos;t visible.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The best product leaders are &lt;strong&gt;Conviction-Led&lt;/strong&gt; and &lt;strong&gt;Data-Informed&lt;/strong&gt;. They prioritise research to understand the human struggle, but they do not expect research to design the solution.&lt;/p&gt;
&lt;p&gt;To achieve something great, you have to accept that for a long time, the spreadsheet will look ugly. You have to be willing to walk down the hill to find the mountain.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;Homework:&lt;/strong&gt; &lt;em&gt;Look at your current roadmap. Are you only building things that you can prove will work? If so, you might be optimising your way to a dead end.&lt;/em&gt;&lt;/p&gt;
</content:encoded><category>Innovation</category><category>Product Strategy</category><category>Mental Models</category><category>Discovery</category><author>Steve James</author></item><item><title>The Critical Mass of Belief: Why We Biologically Resist the Shift to Product</title><link>https://stvpj.com/blog/resistance-change-kuhn-neuroscience/</link><guid isPermaLink="true">https://stvpj.com/blog/resistance-change-kuhn-neuroscience/</guid><description>Robert Langdon’s latest adventure highlights a truth Thomas Kuhn taught us decades ago: paradigms don&apos;t shift because of data; they shift because of critical mass. We explore why leaders cling to the illusion of predictability, and why the &apos;Product&apos; paradigm will eventually win.</description><pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The Curse of the &quot;Fixer&quot;&lt;/h2&gt;
&lt;p&gt;I have a confession to make: I am incapable of looking at a broken process and walking away.&lt;/p&gt;
&lt;p&gt;Throughout my career, this trait has been both a blessing and a curse. I’m the person who sees a software team drowning in bureaucracy, or a release cycle that takes six months for a one-line code change, and I have a pathological need to fix it. I’ve spent years working with organisations, trying to help them optimise their development practices, move away from rigid planning, and actually deliver value.&lt;/p&gt;
&lt;p&gt;But more often than I care to admit, I’ve hit a brick wall.&lt;/p&gt;
&lt;p&gt;I would present the data. I would show the cycle times. I would prove, mathematically, that the rigid need for predictability was damaging our ability to continuously deliver value. And yet, leadership would nod, agree, and then proceed to change absolutely nothing. It has always mystified me. Why, when presented with the solution to their problems, do smart people cling so tightly to the very systems that are sinking them?&lt;/p&gt;
&lt;h2&gt;The &quot;Aha&quot; Moment&lt;/h2&gt;
&lt;p&gt;I began to understand why in the most unexpected place.&lt;/p&gt;
&lt;p&gt;I was recently reading Dan Brown’s new novel, &lt;em&gt;The Secret of Secrets&lt;/em&gt;. I was trying to switch off from thinking about work, but I noticed an interesting passage that made me pause. Our protagonist, Robert Langdon, references the seminal work of philosopher &lt;strong&gt;Thomas Kuhn&lt;/strong&gt;, &lt;em&gt;&lt;a href=&quot;https://www.amazon.co.uk/Structure-Scientific-Revolutions-Thomas-Kuhn/dp/0226458121&quot;&gt;The Structure of Scientific Revolutions&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The book references Kuhn&apos;s argument that a paradigm-altering change cannot truly take place until it has reached a &quot;critical mass.&quot; Until that tipping point is reached, the old way of thinking doesn&apos;t just persist; it actively fights to protect the status quo.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Normal science, for example, often suppresses fundamental novelties because they are necessarily subversive of its basic commitments.&quot; — &lt;em&gt;Thomas Kuhn&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It changed my perspective on the resistance I’ve faced in boardrooms. It wasn&apos;t about logic, and it wasn&apos;t about data. It was about the psychology of belief.&lt;/p&gt;
&lt;h2&gt;The Guardian&apos;s Dilemma&lt;/h2&gt;
&lt;p&gt;It is easy to vilify resistant leaders as stubborn or self-interested, but that is rarely the truth. Most leaders aren&apos;t resisting because they are trying to protect their jobs; they are resisting because they are trying to protect the &lt;strong&gt;company&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;These leaders view themselves as the guardians of the organisation&apos;s stability and growth. They are responsible for revenue, churn, and shareholder value. When we propose a shift to a &lt;strong&gt;Product Operating Model&lt;/strong&gt;, moving from &quot;fixed scope and dates&quot; to &quot;outcomes and experimentation&quot;, they don&apos;t hear &quot;agility.&quot; They hear &quot;risk.&quot;&lt;/p&gt;
&lt;p&gt;This reaction is rooted in &lt;strong&gt;Loss Aversion&lt;/strong&gt;. As explained by &lt;a href=&quot;https://www.wallstreetprep.com/knowledge/loss-aversion/&quot;&gt;Wall Street Prep&lt;/a&gt;, the psychological pain of a potential loss (e.g., a dip in revenue or a failed release) is twice as powerful as the pleasure of an equivalent gain.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Loss aversion refers to the tendency for people to prefer avoiding losses to acquiring equivalent gains... The pain of losing is psychologically about twice as powerful as the pleasure of gaining.&quot; — &lt;em&gt;Behavioural Economics Principle&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To a conscientious leader, the &quot;known bad&quot; of the current model feels safer than the &quot;unknown good&quot; of the new one. They stick to the old ways not out of ignorance, but out of a genuine, albeit misplaced, duty to prevent irreparable damage to the metrics that matter.&lt;/p&gt;
&lt;h2&gt;The Biology of &quot;No&quot;&lt;/h2&gt;
&lt;p&gt;This protective instinct is reinforced by our biology. Organisations are made of human beings, and despite what we&apos;d like to believe, human beings are not data-driven machines; we are emotional, biological creatures.&lt;/p&gt;
&lt;p&gt;According to research on the &lt;strong&gt;neuroscience of change&lt;/strong&gt; by groups like &lt;a href=&quot;https://neurofied.com/the-psychology-of-resistance-to-change/&quot;&gt;Neurofied&lt;/a&gt; and &lt;a href=&quot;https://www.eighthmile.com.au/blog/neuroscience-of-change&quot;&gt;Eighth Mile Consulting&lt;/a&gt;, the brain is wired to perceive uncertainty not as an intellectual challenge, but as a physical threat. When a leader is told to embrace &quot;empiricism&quot; (where the outcome isn&apos;t known upfront), it triggers the &lt;strong&gt;amygdala&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This fight-or-flight response bypasses the logic centres of the prefrontal cortex. The resistance you face is a &lt;strong&gt;biological safety mechanism&lt;/strong&gt;. The brain favours the path of least resistance because it requires less caloric energy. This is &lt;strong&gt;Cognitive Inertia&lt;/strong&gt;: the tendency to persist in established mental models simply because they are established.&lt;/p&gt;
&lt;h2&gt;Identity-Protective Cognition&lt;/h2&gt;
&lt;p&gt;Beyond the fear of damaging the company, there is also the subtle force of &lt;strong&gt;Identity-Protective Cognition&lt;/strong&gt; (IPC).&lt;/p&gt;
&lt;p&gt;Yale researcher &lt;a href=&quot;https://culturalcognition.net/&quot;&gt;Dan Kahan’s work&lt;/a&gt; suggests that we process information to protect our standing within our &quot;tribe.&quot; If a Project Manager has spent 20 years mastering the art of the Gantt chart to give stakeholders a sense of certainty, they have built their professional identity on being &quot;The Predictor.&quot;&lt;/p&gt;
&lt;p&gt;To accept that &lt;strong&gt;predictability is impossible&lt;/strong&gt; in software is to admit that their primary skill set is no longer relevant.&lt;/p&gt;
&lt;p&gt;This triggers &lt;strong&gt;Identity-Protective Reasoning&lt;/strong&gt;. The more data you provide contradicting their worldview, the more they may double down on their original belief. It’s not just about keeping a job; it’s about maintaining their sense of professional worth.&lt;/p&gt;
&lt;h2&gt;The Age of Software: A Crisis of Paradigms&lt;/h2&gt;
&lt;p&gt;This brings us back to Kuhn and the current state of our industry.&lt;/p&gt;
&lt;p&gt;As &lt;strong&gt;Mik Kersten&lt;/strong&gt; argues in &lt;em&gt;&lt;a href=&quot;https://projecttoproduct.org/&quot;&gt;Project to Product&lt;/a&gt;&lt;/em&gt;, and &lt;strong&gt;Carlota Perez&lt;/strong&gt; maps in &lt;em&gt;&lt;a href=&quot;https://carlotaperez.org/books/&quot;&gt;Technological Revolutions and Financial Capital&lt;/a&gt;&lt;/em&gt;, we are currently in the &lt;strong&gt;Turning Point&lt;/strong&gt; of the Age of Software. The old paradigm is not the concept of a &quot;project&quot; itself; segmenting work is necessary. The old paradigm is the management model that demands we predict the scope, time, and cost of those segments before we have even begun.&lt;/p&gt;
&lt;p&gt;But the &lt;strong&gt;anomalies&lt;/strong&gt; are piling up. The failed digital transformations, the tech debt bankruptcies, and the inability of legacy banks to compete with digital natives: these are the cracks in the paradigm.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Technological revolutions are not just about new technology; they are about the new common sense... The old way of doing things no longer works, and the new way is not yet fully understood.&quot; — &lt;em&gt;Carlota Perez&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;The Inevitable End&lt;/h2&gt;
&lt;p&gt;So, does this biology of resistance mean I am giving up? Absolutely not.&lt;/p&gt;
&lt;p&gt;I will not stop pushing for this change. I will continue to highlight the flaws in the old system, continue to provide the data, and continue to build the psychological safety required for leaders to let go of the illusion of control. I am committed to helping this industry reach that tipping point.&lt;/p&gt;
&lt;p&gt;But I am also realistic. The &quot;Product&quot; way of thinking has been around for over two decades, yet we are still fighting for it. This suggests that the &lt;strong&gt;Critical Mass&lt;/strong&gt; of belief Kuhn described has not yet been achieved. While I will keep fighting, history suggests that if we cannot win the argument, the paradigm shift will happen in one of two ways:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. The Generational Shift (The Max Planck Route)&lt;/strong&gt;
As physicist Max Planck famously noted, &lt;em&gt;&quot;A new scientific truth does not triumph by convincing its opponents... but rather because its opponents eventually die, and a new generation grows up that is familiar with it.&quot;&lt;/em&gt; It is possible that we are simply waiting for the old guard to retire, making way for a generation that doesn&apos;t need to be convinced that the world is uncertain, because they&apos;ve never known it to be anything else.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. The Extinction Event (The Perez &amp;amp; Kersten Route)&lt;/strong&gt;
If organisations refuse to adapt, the market will make the decision for them. As Mik Kersten and Carlota Perez predict, we are exiting the installation phase of this technological revolution. The companies that cannot adopt the new &quot;common sense&quot; will simply fail, replaced by the digital natives and adaptive firms that embraced the new paradigm.&lt;/p&gt;
&lt;p&gt;I’m betting on the change. Whether it happens through enlightenment, retirement, or extinction is up to us.&lt;/p&gt;
</content:encoded><category>Change Management</category><category>Psychology</category><category>Thomas Kuhn</category><category>Product Operating Model</category><category>Neuroscience</category><category>Dan Brown</category><author>Steve James</author></item><item><title>AI Will Supercharge Your Value Stream, But Only If You Fix Your Architecture First</title><link>https://stvpj.com/blog/ai-value-stream-architecture/</link><guid isPermaLink="true">https://stvpj.com/blog/ai-value-stream-architecture/</guid><description>We often talk about AI writing code, but its real power lies in managing the flow of value. The catch? You can&apos;t apply a Ferrari engine to a horse-drawn cart. Here is why organisations must restructure to unlock the AI advantage.</description><pubDate>Tue, 16 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The New Promise of Flow&lt;/h2&gt;
&lt;p&gt;When we talk about AI in software delivery, the conversation usually stops at the IDE. We obsess over GitHub Copilot writing boilerplate code or LLMs drafting unit tests.&lt;/p&gt;
&lt;p&gt;While these gains are real, they are &lt;strong&gt;local optimisations&lt;/strong&gt;. Making a developer type 20% faster doesn&apos;t mean the customer gets value 20% sooner, especially if that code sits in a queue for two weeks waiting for approval.&lt;/p&gt;
&lt;p&gt;The true revolution isn&apos;t in code generation; it is in &lt;strong&gt;Flow Intelligence&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;AI is arguably the first technology capable of seeing the &quot;invisible&quot; parts of a value stream. It promises to predict bottlenecks, quantify delays, and orchestrate context across teams. But before we get carried away with the technology, we have to acknowledge an uncomfortable truth: we have known how to fix delivery for years, and we still haven&apos;t done it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Manuals We Ignored&lt;/h2&gt;
&lt;p&gt;The concept of the &lt;strong&gt;Value Stream&lt;/strong&gt; is not new. It has been a staple of Lean manufacturing for decades and has been discussed in software circles for nearly as long.&lt;/p&gt;
&lt;p&gt;The &quot;gold standard&quot; texts on this topic have been sitting on our bookshelves for years:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Mik Kersten’s&lt;/strong&gt; &lt;em&gt;&lt;a href=&quot;https://www.amazon.co.uk/Project-Product-survive-disruption-age/dp/1942788398&quot;&gt;Project to Product&lt;/a&gt;&lt;/em&gt; meticulously outlines why the project model fails in the Age of Software and provides the &lt;strong&gt;Flow Framework&lt;/strong&gt; to fix it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Donald Reinertsen’s&lt;/strong&gt; &lt;em&gt;&lt;a href=&quot;https://www.amazon.co.uk/Principles-Product-Development-Flow-Second/dp/1935401009&quot;&gt;The Principles of Product Development Flow&lt;/a&gt;&lt;/em&gt; provides the mathematical and economic foundation for managing queues, batch sizes, and &lt;strong&gt;Cost of Delay&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Despite this wealth of knowledge, true adoption has been poor. Most organisations still pay lip service to &quot;Agile&quot; while maintaining rigid, project-based governance structures.&lt;/p&gt;
&lt;p&gt;What &lt;em&gt;is&lt;/em&gt; new, however, is that AI has changed the calculus. Previously, implementing Flow required immense manual effort to gather data and police processes. Now, AI offers a game-changing capability to make Flow visible, predictive, and actionable, but only if the underlying streams are constructed correctly.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;A Reminder: What Do We Mean by Value Streams and Flow?&lt;/h2&gt;
&lt;p&gt;Before we look at the potential AI advantage, it is worth reminding ourselves of the fundamentals.&lt;/p&gt;
&lt;p&gt;In system development, a &lt;strong&gt;Value Stream&lt;/strong&gt; is simply the sequence of activities required to deliver a product or service to a customer. It starts when a request is made (or a market opportunity is spotted) and ends only when value is realised in the hands of the user.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Flow&lt;/strong&gt; is the measure of how easily work moves through that stream.
In a healthy system:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Flow Velocity&lt;/strong&gt; is high (value is delivered frequently).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow Efficiency&lt;/strong&gt; is high (work isn&apos;t sitting in wait states).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow Load&lt;/strong&gt; is managed (teams aren&apos;t drowning in WIP).&lt;/li&gt;
&lt;/ul&gt;
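&lt;p&gt;To make those three measures concrete, here is a minimal sketch of the arithmetic behind them. The work items, dates, and active-day counts are invented for illustration; they are not the data model of any real tool, and the Flow Framework defines these metrics more richly than this toy does.&lt;/p&gt;

```python
from datetime import datetime

# Illustrative work items (all values are made up for this sketch).
# "active_days" is the time someone was actually working on the item;
# the rest of the elapsed time was spent waiting in queues.
items = [
    {"start": datetime(2025, 11, 1), "end": datetime(2025, 11, 15), "active_days": 3},
    {"start": datetime(2025, 11, 5), "end": datetime(2025, 11, 25), "active_days": 5},
]

def flow_efficiency(item):
    """Active time divided by total elapsed time, between 0 and 1."""
    elapsed_days = (item["end"] - item["start"]).days
    return item["active_days"] / elapsed_days

def flow_load(items, as_of):
    """Count of items in progress (started but not finished) at a moment in time."""
    return sum(1 for it in items if it["start"] is not None
               and it["start"].date() is not None
               and it["start"] and as_of >= it["start"] and as_of and it["end"] > as_of)

for it in items:
    print(round(flow_efficiency(it), 2))   # e.g. 0.21 and 0.25
print(flow_load(items, datetime(2025, 11, 10)))  # both items in flight: 2
```

&lt;p&gt;Flow efficiencies of 20 to 25 percent would actually be unusually good; in most organisations the figure is in single digits, which is exactly the &quot;invisible&quot; wait time the lens is meant to expose.&lt;/p&gt;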
&lt;p&gt;If you view your organisation through this lens, you stop managing people (resources) and start managing the work itself.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;How AI Supercharges the Stream&lt;/h2&gt;
&lt;p&gt;If you have a defined Value Stream, AI becomes the ultimate accelerator.&lt;/p&gt;
&lt;h3&gt;1. Predictive Visibility&lt;/h3&gt;
&lt;p&gt;AI agents can ingest signals from disparate tools (Jira, GitHub, Slack) to construct a high-fidelity map of your stream. They can analyse &lt;strong&gt;Flow Load&lt;/strong&gt; and historical throughput to predict bottlenecks before they form.&lt;/p&gt;
&lt;h3&gt;2. Dynamic Cost of Delay&lt;/h3&gt;
&lt;p&gt;Reinertsen teaches us that Cost of Delay is the key to economic decision-making. AI can finally make this practical. Instead of a theoretical debate, an AI agent can flag a stalled feature and quantify exactly how much potential revenue is being lost every hour it sits in &quot;Ready for QA.&quot;&lt;/p&gt;
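&lt;p&gt;The underlying arithmetic is simple enough to sketch. The weekly value figure and the stall duration below are invented assumptions for illustration; in practice Reinertsen&apos;s point is that even a rough, agreed estimate beats having no number at all.&lt;/p&gt;

```python
# Minimal Cost of Delay sketch. WEEKLY_VALUE_GBP and the 48-hour stall
# are illustrative assumptions, not outputs of any real tool.
WEEKLY_VALUE_GBP = 10_000  # estimated value the feature earns per week once live

def cost_of_delay_per_hour(weekly_value):
    """Convert an estimated weekly value into a cost per elapsed hour of delay."""
    hours_per_week = 7 * 24
    return weekly_value / hours_per_week

def stall_cost(weekly_value, hours_stalled):
    """Value foregone while the feature sits in a queue such as 'Ready for QA'."""
    return cost_of_delay_per_hour(weekly_value) * hours_stalled

print(round(cost_of_delay_per_hour(WEEKLY_VALUE_GBP), 2))  # roughly 59.52 per hour
print(round(stall_cost(WEEKLY_VALUE_GBP, 48), 2))          # roughly 2857.14 for a 48-hour stall
```

&lt;p&gt;What the AI layer adds is not this arithmetic but the inputs: inferring the weekly value estimate from usage and revenue data, and detecting the stall automatically instead of waiting for someone to notice it in a stand-up.&lt;/p&gt;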
&lt;h3&gt;3. Context Orchestration&lt;/h3&gt;
&lt;p&gt;AI can carry the &quot;context payload&quot; through the stream. It can synthesise user research from the discovery phase directly into acceptance criteria for delivery, ensuring the &quot;why&quot; never gets lost as the work flows to the &quot;how&quot;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why You Aren&apos;t Ready for AI-Driven Flow&lt;/h2&gt;
&lt;p&gt;You cannot supercharge a system that is fundamentally broken. The reason most organisations cannot leverage AI for flow optimisation isn&apos;t a lack of tools; it is a lack of &lt;strong&gt;autonomy&lt;/strong&gt; and &lt;strong&gt;structural alignment&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;The &quot;Feature as a Project&quot; Trap&lt;/h3&gt;
&lt;p&gt;Many organisations have stable teams, they don&apos;t fire and re-hire developers for every release. However, they still force these stable teams to operate within a &lt;strong&gt;Project Funding model&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Instead of funding a continuous stream of value, the business funds a &quot;Feature Release&quot; as a distinct project. As Kersten argues, this forces teams to batch work into massive, high-risk releases to justify the budget.&lt;/p&gt;
&lt;p&gt;AI thrives on small batch sizes. If your governance model forces you to stockpile six months of code into one &quot;Big Bang&quot; release, AI cannot help you. It can predict the date you will miss, but it cannot fix the risk inherent in the batch size.&lt;/p&gt;
&lt;h3&gt;Architecture That Blocks Autonomy&lt;/h3&gt;
&lt;p&gt;Even if you have a &quot;Product Team,&quot; do they actually own the full vertical slice of their value stream?&lt;/p&gt;
&lt;p&gt;Often, a team is responsible for a product but lacks the &lt;strong&gt;architectural autonomy&lt;/strong&gt; to deliver it. They might own the application code but depend on a central &quot;Platform Team&quot; for infrastructure, a &quot;Data Team&quot; for schema changes, and a &quot;QA Team&quot; for testing.&lt;/p&gt;
&lt;p&gt;These dependencies are flow killers. If a team cannot deploy what they build without three other teams signing off, the value stream is severed. AI can visualise this blockage, but it cannot write the code to decouple your monolith.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What do we do then?&lt;/h2&gt;
&lt;p&gt;AI offers us the chance to finally solve the &quot;visibility problem&quot; in software delivery. It promises to turn the lights on in the factory, revealing exactly where value is leaking.&lt;/p&gt;
&lt;p&gt;But technology amplifies the underlying habits of an organisation. If you apply AI to a well-architected value stream, you can achieve unprecedented speed and adaptability. If you apply it to a bureaucracy of gated releases and dependencies, you will just hit the wall faster.&lt;/p&gt;
&lt;p&gt;The blueprints for success, written by Kersten and Reinertsen, have been available for years. AI is simply the urgent wake-up call to finally read them.&lt;/p&gt;
</content:encoded><category>AI</category><category>Value Streams</category><category>Flow Framework</category><category>Product Management</category><category>Architecture</category><category>Systems Thinking</category><author>Steve James</author></item><item><title>A Product Management Manifesto for the AI Era</title><link>https://stvpj.com/blog/product-management-manifesto-ai/</link><guid isPermaLink="true">https://stvpj.com/blog/product-management-manifesto-ai/</guid><description>Agile won the war on delivery, but we are losing the peace on value. It is time for a new set of values that prioritises outcomes over outputs and learning over logistics.</description><pubDate>Mon, 08 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The Agile Manifesto Won the War on Delivery. Now We Need One for Value.&lt;/h2&gt;
&lt;p&gt;In 2001, seventeen software engineers met in a ski lodge in Utah and changed the world. The &lt;a href=&quot;https://agilemanifesto.org/&quot;&gt;Agile Manifesto&lt;/a&gt; was a rebellion against heavy-handed documentation and rigid planning. It was necessary, it was effective, and it fundamentally solved the problem of &lt;em&gt;how to build software&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;But nearly twenty-five years later, we face a different problem.&lt;/p&gt;
&lt;p&gt;Agile frameworks have been so successful that &quot;delivery&quot; is often commoditised. We can ship code faster than ever. And now, with the arrival of Generative AI, the cost of producing code, content, and features is trending toward zero.&lt;/p&gt;
&lt;p&gt;The bottleneck has shifted. The challenge today isn&apos;t &lt;em&gt;can we build it?&lt;/em&gt; The challenge is &lt;em&gt;should we build it?&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;A Return to Essence, Not a New Definition&lt;/h3&gt;
&lt;p&gt;To be clear: &lt;strong&gt;Prioritising value over volume has &lt;em&gt;always&lt;/em&gt; been the true role of a Product Manager.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The best Product Managers have always understood that velocity is a metric for output, not outcomes. They knew that you could double your speed and still build something nobody wanted.&lt;/p&gt;
&lt;p&gt;However, for the last two decades, the sheer difficulty of software delivery gave the industry an excuse. It was easy to drift into the role of &quot;Backlog Administrator&quot;, spending days writing tickets, managing Jira, and &quot;feeding the beast&quot; of engineering. It was dysfunctional, but it was considered necessary work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;That excuse is now gone.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the AI era, the &quot;tactical ladder&quot; of ticket writing, summarising interviews, and basic analysis is being automated. If your primary contribution is moving tickets around a board, an agent will soon do it faster and cheaper.&lt;/p&gt;
&lt;p&gt;This manifesto is not about inventing a new job. It is about stripping away the administrative safety blanket and focusing on the only thing that has ever really mattered: &lt;strong&gt;judgement, strategy, and the definition of value.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Shift: From Delivery to Value&lt;/h2&gt;
&lt;p&gt;We respect the original four values. They ground us in the reality of software creation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Individuals and interactions&lt;/strong&gt; over processes and tools&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Working software&lt;/strong&gt; over comprehensive documentation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customer collaboration&lt;/strong&gt; over contract negotiation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Responding to change&lt;/strong&gt; over following a plan&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;But for Product Managers – especially those grappling with the speed and uncertainty of AI – we need to overlay a new set of priorities.&lt;/p&gt;
&lt;h3&gt;1. Customer &amp;amp; Outcome Learning over Backlog Administration&lt;/h3&gt;
&lt;p&gt;The most valuable time a PM spends is with the problem, not the ticket. We must get close to customers, frame the outcome, and decide how we will measure success. Backlog hygiene matters, but only as a means to an end. We must pair &lt;strong&gt;product outcomes&lt;/strong&gt; (activation, retention, NRR) with &lt;strong&gt;delivery health&lt;/strong&gt; (lead time, deployment frequency) to tell the complete story: &quot;We shipped this, and it improved that&quot;.
&lt;em&gt;Refs: &lt;a href=&quot;https://dora.dev/guides/dora-metrics-four-keys/&quot;&gt;DORA&lt;/a&gt;, &lt;a href=&quot;https://itrevolution.com/product/accelerate/&quot;&gt;Accelerate&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;2. Validated Value over Shipped Features&lt;/h3&gt;
&lt;p&gt;A feature isn’t valuable because it shipped; it’s valuable because it changed user behaviour. In the AI era, this is doubly true. &quot;Done&quot; is no longer just code in production; it includes clean data, pre-agreed evaluation thresholds, and safety guardrails. We value the &lt;em&gt;evidence of impact&lt;/em&gt; more than the &lt;em&gt;volume of output&lt;/em&gt;.
&lt;em&gt;Refs: &lt;a href=&quot;https://arxiv.org/abs/1810.03993&quot;&gt;Model Cards&lt;/a&gt;, &lt;a href=&quot;https://www.productcompass.pm/p/ai-prd-template&quot;&gt;AI PRD&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;3. Continuous Collaboration over Periodic Alignment&lt;/h3&gt;
&lt;p&gt;Quarterly planning is too slow for modern product work. We value a steady, &quot;little and often&quot; rhythm of orchestration between customers, engineering, data science, and compliance. We prefer short discovery loops and frequent, transparent updates over &quot;big bang&quot; alignment meetings that are obsolete the moment they finish.
&lt;em&gt;Refs: &lt;a href=&quot;https://www.svpg.com/inspired-how-to-create-products-customers-love/&quot;&gt;INSPIRED&lt;/a&gt;, &lt;a href=&quot;https://itrevolution.com/product/project-to-product/&quot;&gt;Project to Product&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;4. Adaptive Strategy over Fixed Roadmaps&lt;/h3&gt;
&lt;p&gt;Strategy is a set of bets, not a set of certainties. In a world of fat-tail risks and rapidly evolving AI capabilities, assumptions rot quickly. We treat the roadmap as a living hypothesis tied to outcomes, reserving the right to adjust scope and sequence as we learn. We value the &lt;strong&gt;agility of our direction&lt;/strong&gt; more than the &lt;strong&gt;fidelity of our Gantt chart&lt;/strong&gt;.
&lt;em&gt;Ref: &lt;a href=&quot;https://erikbern.com/2019/04/15/why-software-projects-take-longer-than-you-think-a-statistical-model.html&quot;&gt;Fat-tail uncertainty&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;12 Principles for Modern Product Management&lt;/h2&gt;
&lt;p&gt;If those are the values, how do we live them? Here are 12 principles to guide the daily work of a Product Manager in 2025.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Start with problems, frame outcomes.&lt;/strong&gt;
Begin every initiative by writing the customer problem in plain English. Identify who is affected and how you will measure &quot;better.&quot; If you can&apos;t articulate the success metric – activation goes up, friction comes down – you aren&apos;t ready to build.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Flow and Value are inseparable.&lt;/strong&gt;
Engineering capability and commercial results are linked. Make this visible. Review &lt;strong&gt;DORA metrics&lt;/strong&gt; (how fast we ship) alongside &lt;strong&gt;product KPIs&lt;/strong&gt; (how much value we create). When delivery slows, outcomes suffer. You cannot manage one without the other.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Decide with evidence, not opinion.&lt;/strong&gt;
Replace long debates with small tests: a prototype, a concierge trial, a fake-door prompt. Agree on the &lt;strong&gt;thresholds&lt;/strong&gt; before you test – know what &quot;pass&quot; and &quot;fail&quot; look like before the data comes in to prevent post-hoc rationalisation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. Shape small bets; sequence by risk.&lt;/strong&gt;
Break work into thin slices that prove the riskiest assumptions first. Whether it’s legal constraints, API latency, or user willingness-to-pay, tackle the &quot;unknowns that can kill you&quot; before polishing the UI.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5. Make AI &quot;Definition of Done++&quot;.&lt;/strong&gt;
If you are building with AI, &quot;done&quot; means more than &quot;it works on my machine.&quot; It requires documented data lineage, clear offline benchmarks, online success metrics, and safety guardrails. Operations should be boring – write the docs to keep them that way.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;6. Organise by value streams, not projects.&lt;/strong&gt;
Fund durable teams that own a customer outcome end-to-end. Avoid short-lived projects that treat teams like mercenaries. When a team owns the long-term result, they make better long-term decisions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;7. Roadmaps tell the story, not the secrets.&lt;/strong&gt;
Use &quot;Now/Next/Later&quot; to signal direction without locking yourself into a lie. Stakeholders need to know the &lt;em&gt;goals&lt;/em&gt; and the &lt;em&gt;guardrails&lt;/em&gt;, but the specific features should emerge from discovery. Build trust through transparency, not false certainty.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;8. Prioritise capabilities over ceremony.&lt;/strong&gt;
The best teams aren&apos;t faster because they have better meetings; they are faster because they invest in architecture, automated testing, and feature flags. Invest in the machinery that reduces the cost of experimentation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;9. Design for responsible impact.&lt;/strong&gt;
Users judge products by how they behave on their worst day. Bake privacy, safety, and explainability into discovery. Ask: &quot;What is the worst plausible misuse of this?&quot; and build the mitigation before you launch.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;10. Make trade-offs explicit.&lt;/strong&gt;
Every choice has a cost. Write down the options considered (e.g., Build vs Buy, Fine-tune vs Prompt Engineering) and why you chose your path. A short &quot;Decision Record&quot; saves months of re-litigation later.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;11. Teach the organisation to learn.&lt;/strong&gt;
Hold short reviews that focus on &lt;em&gt;what we learned&lt;/em&gt;, not just &lt;em&gt;what we did&lt;/em&gt;. Celebrate the null results – experiments that failed early save the company money. Share the learning, not just the launch.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;12. Lead with clarity and kindness.&lt;/strong&gt;
PMs often lack formal authority, so our superpower is shared understanding. Be precise with language, generous with credit, and assume good intent. People follow leaders who make the complex feel manageable.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Steward of &quot;Why&quot;&lt;/h2&gt;
&lt;p&gt;The original Agile Manifesto was a reaction to a time of scarcity, where shipping software was difficult, expensive, and slow. Its principles were designed to remove friction.&lt;/p&gt;
&lt;p&gt;Today, we face a crisis of abundance. AI and modern tooling have made the act of creation easier than ever. The barriers to entry have collapsed. But this ease of creation brings a new danger: the ability to build the wrong things, faster and with more confidence than ever before.&lt;/p&gt;
&lt;p&gt;This is the turning point for Product Management. The tactical ladder of ticket-writing and backlog administration is disappearing. What remains is the harder, more human work: judgement, orchestration, ethics, and the courage to stop a feature that adds no value.&lt;/p&gt;
&lt;p&gt;We must stop pretending we can predict the future with Gantt charts and instead build the capability to respond to it. We must move from being the managers of schedules to the stewards of value.&lt;/p&gt;
&lt;p&gt;That is the manifesto for the next age. Not just to work faster, but to work with purpose.&lt;/p&gt;
</content:encoded><category>Product Management</category><category>Agile</category><category>AI</category><category>Strategy</category><category>Product Ops</category><author>Steve James</author></item><item><title>Agile Governance: Changing the Rules Without Losing Control</title><link>https://stvpj.com/blog/agile-governance/</link><guid isPermaLink="true">https://stvpj.com/blog/agile-governance/</guid><description>Agile does not remove the need for governance. It changes how governance works so that it supports the flow of value, and must be tailored to each organisation rather than copied from traditional project management.</description><pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When organisations talk about &quot;going agile&quot;, governance is often the awkward topic that everyone sidesteps.&lt;/p&gt;
&lt;p&gt;Leaders worry that if they relax traditional controls, work will go off the rails. Teams worry that if they keep the old controls, nothing will really change and the agile transformation will remain a set of rituals on top of business as usual.&lt;/p&gt;
&lt;p&gt;The key point is simple:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Agile ways of working do not remove the need for governance.&lt;br /&gt;
They change how governance is designed so that it supports the flow of value rather than blocking it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To get this right, you cannot just bolt an agile framework like Scrum or SAFe onto a traditional project management model. The real work is to apply &lt;strong&gt;agile principles&lt;/strong&gt; to the way you govern. That means shaping governance around your own culture, risk appetite, and constraints. What works in one organisation may fail completely in another.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Agile Is More Than Scrum&lt;/h2&gt;
&lt;p&gt;I always feel that it&apos;s important to remind people that Agile doesn&apos;t necessarily mean Scrum. It is very easy to fall into the trap of thinking that &quot;agile&quot; means Scrum ceremonies and story points.&lt;/p&gt;
&lt;p&gt;Agile refers to a set of principles and values. Frameworks like Scrum, Kanban, XP or SAFe are simply different ways of applying those principles in context.&lt;/p&gt;
&lt;p&gt;Some of the core agile principles that matter for governance are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prioritising outcomes and customer value over internal activity&lt;/li&gt;
&lt;li&gt;Welcoming change because learning is continuous&lt;/li&gt;
&lt;li&gt;Delivering in small increments so that risk and uncertainty are reduced&lt;/li&gt;
&lt;li&gt;Working in close collaboration with stakeholders and users&lt;/li&gt;
&lt;li&gt;Building sustainable pace into the way teams deliver&lt;/li&gt;
&lt;li&gt;Making decisions based on feedback from real use, not just plans&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If governance is not aligned with these principles, it does not matter which framework you choose. You will still have friction.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Governance Is Really For&lt;/h2&gt;
&lt;p&gt;Governance is not a dirty word. It is simply a structured way of making decisions and providing oversight.&lt;/p&gt;
&lt;p&gt;At the delivery level, governance exists to answer questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Are we investing in the right things?&lt;/li&gt;
&lt;li&gt;Are we delivering value at an acceptable pace?&lt;/li&gt;
&lt;li&gt;Are we managing risk in a responsible way?&lt;/li&gt;
&lt;li&gt;Are we learning and adapting as we go?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are good questions. The problem is not the intent. The problem is the way many organisations have answered them, usually through heavy, document-driven processes and a stage-gate mindset.&lt;/p&gt;
&lt;p&gt;In traditional project management, that typically means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Large batches of requirements defined up front&lt;/li&gt;
&lt;li&gt;A detailed plan that is expected to hold for months or years&lt;/li&gt;
&lt;li&gt;A sequence of gates where documents are reviewed and signed off&lt;/li&gt;
&lt;li&gt;A strong focus on time, scope, and budget as success criteria&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This model optimises for predictability in a stable environment. Modern digital product work lives in an unstable environment and is full of uncertainty. The old model clashes with that reality.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;When Agile Principles Meet Traditional Governance&lt;/h2&gt;
&lt;p&gt;Agile delivery, regardless of framework, assumes that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You will not know everything up front&lt;/li&gt;
&lt;li&gt;You will learn as you go and adjust direction&lt;/li&gt;
&lt;li&gt;Smaller, more frequent releases are safer than big bangs&lt;/li&gt;
&lt;li&gt;Teams need local decision making authority&lt;/li&gt;
&lt;li&gt;Feedback loops should be short and continuous&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Traditional governance often assumes the opposite. The result is friction at several levels.&lt;/p&gt;
&lt;h3&gt;1. Up-front certainty vs emergent learning&lt;/h3&gt;
&lt;p&gt;Traditional governance likes fixed scope, fixed dates, and fixed budgets presented early in the process. Agile principles accept that scope will evolve, that dates will be refined as we learn, and that investment should follow evidence of value.&lt;/p&gt;
&lt;p&gt;When governance insists on a detailed, frozen plan, product teams end up writing documents for gate approval while the real decisions live in backlogs, roadmaps, and conversations.&lt;/p&gt;
&lt;h3&gt;2. Big, infrequent decisions vs continuous decision making&lt;/h3&gt;
&lt;p&gt;Stage gates condense decision making into occasional large events with big packs and formal sign-offs.&lt;/p&gt;
&lt;p&gt;Agile principles favour many small decisions. Teams refine what to build next, stakeholders react to real increments of working software or service, and priorities shift based on new information.&lt;/p&gt;
&lt;p&gt;When you try to run both models in parallel, either the agile decisions are slowed down by the gate schedule, or the gates become rubber stamps that add cost without real control.&lt;/p&gt;
&lt;h3&gt;3. Centralised sign-off vs empowered product ownership&lt;/h3&gt;
&lt;p&gt;Traditional approaches rely heavily on committees, steering groups, and central approval bodies.&lt;/p&gt;
&lt;p&gt;Agile principles expect clear, empowered roles. A Product Owner or similar role is accountable for value and priorities. The delivery team is accountable for quality and execution. Decisions are made as close to the work as possible, within clear organisational constraints.&lt;/p&gt;
&lt;p&gt;When every meaningful decision still has to go back to a board, you remove the very autonomy that agile depends on. The result is delay, frustration, and a lot of duplicated storytelling.&lt;/p&gt;
&lt;h3&gt;4. Slowing change vs flowing change&lt;/h3&gt;
&lt;p&gt;A classic risk response is to add more controls and approvals. If something went wrong in the past, the instinct is to require an extra sign-off or another checklist.&lt;/p&gt;
&lt;p&gt;Agile principles and modern engineering practice take a different approach. They reduce risk by making changes smaller, more frequent, and better tested. Automated tests, continuous integration, deployment pipelines, and peer review increase the safety of change without slowing it down.&lt;/p&gt;
&lt;p&gt;If governance is still built on the idea that fewer, larger releases are safer, it will continually fight the engineering practices that actually reduce risk in complex systems.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Agile Governance Looks Like In Practice&lt;/h2&gt;
&lt;p&gt;Agile governance is not the removal of control. It is the redesign of control so that it lives inside the way work is done and supports the flow of value.&lt;/p&gt;
&lt;p&gt;Some common elements of agile aligned governance are:&lt;/p&gt;
&lt;h3&gt;1. Clear decision rights based on roles&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Product roles (Product Owner, Product Manager) are given real authority over what gets built and in what order&lt;/li&gt;
&lt;li&gt;Teams are given autonomy over how they deliver, within constraints on quality, security, and compliance&lt;/li&gt;
&lt;li&gt;Risk, legal, finance, and other control functions have defined points of engagement, and their input is integrated into the regular rhythm of delivery rather than handled as a separate track&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Everyone knows who decides what and on what basis.&lt;/p&gt;
&lt;h3&gt;2. Self organising teams with explicit boundaries&lt;/h3&gt;
&lt;p&gt;Self organisation does not mean chaos. It means teams are trusted to organise their work to meet agreed goals.&lt;/p&gt;
&lt;p&gt;Governance defines the boundaries. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Non negotiable standards for security, privacy, and architecture&lt;/li&gt;
&lt;li&gt;Expectations for testing, documentation, and support&lt;/li&gt;
&lt;li&gt;Release practices, such as the need for automated checks and rollback plans&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Within those boundaries, teams have freedom. Governance is expressed as guardrails, not detailed instructions.&lt;/p&gt;
&lt;h3&gt;3. Built in inspection and adaptation&lt;/h3&gt;
&lt;p&gt;Agile teams use a variety of regular events and feedback loops, for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Short planning cycles&lt;/li&gt;
&lt;li&gt;Frequent reviews or demos of working software or service&lt;/li&gt;
&lt;li&gt;Regular retrospectives that look at process, not just output&lt;/li&gt;
&lt;li&gt;Continuous monitoring of systems in production&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are not just team rituals. They are governance mechanisms. They create transparency and trigger adaptation. The presence and quality of these practices can be a formal governance concern.&lt;/p&gt;
&lt;h3&gt;4. Technical practices as controls&lt;/h3&gt;
&lt;p&gt;Definitions of Ready and Done, code review, automated tests, and deployment pipelines are all part of governance.&lt;/p&gt;
&lt;p&gt;Instead of asking &quot;Did you fill in this template?&quot;, governance can ask:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do we have a reliable way to prove that this change is safe?&lt;/li&gt;
&lt;li&gt;Can we deploy and roll back quickly?&lt;/li&gt;
&lt;li&gt;Is the environment observable enough that we can detect and respond to issues fast?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These questions are more directly connected to risk than a sign-off on a slide deck.&lt;/p&gt;
&lt;h3&gt;5. Governance that consumes real artefacts&lt;/h3&gt;
&lt;p&gt;In an agile setting, the most reliable sources of truth are living artefacts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Product backlogs and roadmaps&lt;/li&gt;
&lt;li&gt;Delivery metrics and flow measures&lt;/li&gt;
&lt;li&gt;Pipeline status and test results&lt;/li&gt;
&lt;li&gt;Observability dashboards and incident reports&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Governance bodies should review these real artefacts, not separate status packs that summarise them weeks later. That keeps the conversation grounded in reality.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;There Is No One-Size-Fits-All Model&lt;/h2&gt;
&lt;p&gt;A critical point that often gets overlooked is that agile governance is not a single template you can copy from another organisation.&lt;/p&gt;
&lt;p&gt;Agile principles are universal. How you realise them in your context is not.&lt;/p&gt;
&lt;p&gt;Different organisations have different:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cultures and levels of trust&lt;/li&gt;
&lt;li&gt;Regulatory obligations and risk profiles&lt;/li&gt;
&lt;li&gt;Legacy systems and technical constraints&lt;/li&gt;
&lt;li&gt;Talent profiles and experience levels&lt;/li&gt;
&lt;li&gt;Business models and strategic horizons&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A bank with heavy regulatory requirements will design governance differently from a start-up that is still exploring product-market fit. Both can be agile in intent, but their controls will look very different.&lt;/p&gt;
&lt;p&gt;Trying to copy a governance model from another organisation without adapting it to your own context usually fails. People go through the motions, but either risk is not managed properly or delivery slows to a crawl.&lt;/p&gt;
&lt;p&gt;The right question is not &quot;What is the best-practice governance model?&quot; The right question is &quot;How do we apply agile principles to governance for our organisation, given our culture and constraints?&quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why Agile Governance Cannot Sit On Top Of Traditional Project Management&lt;/h2&gt;
&lt;p&gt;If agile principles are driving your delivery model, and traditional project management is driving your governance model, you are effectively running two operating systems at once.&lt;/p&gt;
&lt;p&gt;They optimise for different outcomes:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional project governance&lt;/th&gt;
&lt;th&gt;Agile aligned governance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Optimises for adherence to plan&lt;/td&gt;
&lt;td&gt;Optimises for learning, flow, and outcomes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Assumes scope can be known and fixed up front&lt;/td&gt;
&lt;td&gt;Assumes scope will evolve as we learn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prefers large, infrequent releases&lt;/td&gt;
&lt;td&gt;Prefers small, frequent, reversible changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Centralises decision making at formal gates&lt;/td&gt;
&lt;td&gt;Distributes decision making within clear boundaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Treats change as something to minimise&lt;/td&gt;
&lt;td&gt;Treats change as normal and expected&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Measures success by time, cost, and scope&lt;/td&gt;
&lt;td&gt;Measures success by outcomes and impact&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;You can already see the conflicts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The project plan says scope is fixed; the product team wants flexibility&lt;/li&gt;
&lt;li&gt;The gate calendar says decisions happen monthly; the team wants to deploy daily&lt;/li&gt;
&lt;li&gt;Governance asks for a big pack; the team points to live dashboards and backlogs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you persist with this split-brain model, you usually get the worst of both worlds. Delivery slows down, and the apparent control is more illusion than reality.&lt;/p&gt;
&lt;p&gt;For agile governance to be effective, you have to let agile principles shape the governance model itself. You cannot treat governance as an untouched layer that sits above delivery.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Designing Governance For Flow Of Value&lt;/h2&gt;
&lt;p&gt;So how do you start to redesign governance in a way that supports agile delivery and still respects your responsibilities as an organisation?&lt;/p&gt;
&lt;p&gt;Here are some practical starting points.&lt;/p&gt;
&lt;h3&gt;1. Anchor governance in outcomes and value&lt;/h3&gt;
&lt;p&gt;Make business and customer outcomes the centre of the conversation. Ask:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What problems are we solving?&lt;/li&gt;
&lt;li&gt;How will we know if we have made things better?&lt;/li&gt;
&lt;li&gt;How quickly can we detect that something is not working?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let these questions guide investment and stop or pivot decisions.&lt;/p&gt;
&lt;h3&gt;2. Define clear accountabilities and decision rights&lt;/h3&gt;
&lt;p&gt;Map out who is accountable for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Product direction and priorities&lt;/li&gt;
&lt;li&gt;Delivery approach and quality&lt;/li&gt;
&lt;li&gt;Risk management and compliance input&lt;/li&gt;
&lt;li&gt;Financial oversight&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Create simple, explicit rules about what teams can decide independently and what must be escalated or shared.&lt;/p&gt;
&lt;h3&gt;3. Use flow and quality metrics instead of document counts&lt;/h3&gt;
&lt;p&gt;Track things like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Lead time from idea to production&lt;/li&gt;
&lt;li&gt;Deployment frequency&lt;/li&gt;
&lt;li&gt;Change failure rate and time to restore&lt;/li&gt;
&lt;li&gt;Defect trends and incident patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These metrics give a much clearer picture of health than the number of documents produced.&lt;/p&gt;
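&lt;p&gt;As a rough sketch, here is how those flow metrics might be computed from a simple deployment log. The record format and the numbers are invented for illustration; a real pipeline would pull this data from CI/CD and incident tooling:&lt;/p&gt;

```python
from datetime import datetime

# Hypothetical deployment log: (deployed_at, caused_failure, minutes_to_restore).
# All records below are invented for illustration.
deployments = [
    (datetime(2025, 11, 3, 10, 0), False, None),
    (datetime(2025, 11, 4, 15, 30), True, 45),
    (datetime(2025, 11, 5, 9, 15), False, None),
    (datetime(2025, 11, 7, 11, 0), False, None),
]

def deployment_frequency(deps, window_days):
    """Average deployments per day over the reporting window."""
    return len(deps) / window_days

def change_failure_rate(deps):
    """Share of deployments that caused a failure in production."""
    return sum(1 for _, failed, _ in deps if failed) / len(deps)

def mean_time_to_restore(deps):
    """Average minutes to restore service, over failed deployments only."""
    restores = [mins for _, failed, mins in deps if failed and mins is not None]
    return sum(restores) / len(restores) if restores else 0.0

print(f"Deployment frequency: {deployment_frequency(deployments, 7):.2f}/day")
print(f"Change failure rate:  {change_failure_rate(deployments):.0%}")
print(f"Mean time to restore: {mean_time_to_restore(deployments):.0f} min")
```

&lt;p&gt;A governance review that watches these numbers trend over time learns far more about delivery health than a count of completed documents.&lt;/p&gt;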
&lt;h3&gt;4. Approve capabilities, not individual changes&lt;/h3&gt;
&lt;p&gt;Instead of signing off every release in a committee, invest in the capabilities that make changes safe:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Automated tests at multiple levels&lt;/li&gt;
&lt;li&gt;Reliable pipelines&lt;/li&gt;
&lt;li&gt;Strong peer review practices&lt;/li&gt;
&lt;li&gt;Good observability&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once those are in place and monitored, allow teams to release within defined boundaries.&lt;/p&gt;
&lt;h3&gt;5. Tailor governance to your context and iterate&lt;/h3&gt;
&lt;p&gt;Start with a light, simple governance model that applies agile principles in your environment. Put it in writing, but keep it short and clear.&lt;/p&gt;
&lt;p&gt;Then treat that model as a product:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Inspect it regularly&lt;/li&gt;
&lt;li&gt;Gather feedback from teams, stakeholders, and control functions&lt;/li&gt;
&lt;li&gt;Adapt it when you see bottlenecks, misunderstandings, or blind spots&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Governance itself should evolve through experiments and learning, just like your products do.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Wow, you&apos;ve made it this far?&lt;/h2&gt;
&lt;p&gt;So, to summarise...&lt;/p&gt;
&lt;p&gt;Governance and agile delivery are not enemies. They are only in conflict when governance is frozen in a world that no longer exists.&lt;/p&gt;
&lt;p&gt;To make governance work in an agile environment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Think in terms of principles, not frameworks&lt;/li&gt;
&lt;li&gt;Let those principles shape governance, not just delivery rituals&lt;/li&gt;
&lt;li&gt;Tailor governance to your culture, your risks, and your constraints&lt;/li&gt;
&lt;li&gt;Focus on enabling flow of value, not simply enforcing a plan&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You cannot just &quot;run agile&quot; inside the walls of a traditional project management model and hope it all fits together. You need a governance approach that is just as thoughtfully designed and just as adaptive as the teams it aims to guide.&lt;/p&gt;
</content:encoded><category>Agile</category><category>Governance</category><category>Product Management</category><category>Delivery</category><author>Steve James</author></item><item><title>Product Management at the AI Turning Point: Why the Next Age Demands a New Role</title><link>https://stvpj.com/blog/ai-pm-turning-point/</link><guid isPermaLink="true">https://stvpj.com/blog/ai-pm-turning-point/</guid><description>AI is reshaping not just the products we build but the very nature of work itself. For Product Managers, this turning point demands a shift in mindset, moving beyond execution to strategy, ethics, and orchestration.</description><pubDate>Sun, 28 Sep 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Intro&lt;/h2&gt;
&lt;p&gt;Every few decades, the world crosses a &lt;strong&gt;turning point&lt;/strong&gt;, a moment when old paradigms no longer work and entire industries, organisational models, and roles are upended. In &lt;em&gt;Technological Revolutions and Financial Capital&lt;/em&gt;, Carlota Pérez maps how each major innovation wave (steam, railways, electricity, mass production, information) passes through an &lt;strong&gt;Installation Period&lt;/strong&gt;, then a crisis-ridden &lt;strong&gt;Turning Point&lt;/strong&gt;, and finally a &lt;strong&gt;Deployment Period&lt;/strong&gt; when the new paradigm becomes the operating system of the economy. Companies that reorganise around the new paradigm survive; those that cling to the old rules fade. &lt;a href=&quot;https://www.amazon.co.uk/Technological-Revolutions-Financial-Capital-Dynamics/dp/1843763311&quot;&gt;Buy Pérez (Amazon UK)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Mik Kersten’s &lt;em&gt;Project to Product&lt;/em&gt; extends this lens to modern software and beyond. He argues that at the turning point, organisations must rethink how they create value, not by adding a tool or two, but by overhauling how work is organised, funded, and measured. As he summarises (paraphrased), at the turning point, businesses either master the new means of production or become relics of the last age. &lt;a href=&quot;https://www.amazon.co.uk/Project-Product-Networks-Transform-Business/dp/1942788398&quot;&gt;Buy Kersten (Amazon UK)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It’s not surprising then that this is dominating conversation. A quick Google search for “Product Management and AI” returns tens of thousands of results across blogs, newsletters, and LinkedIn posts. The sheer volume of writing on the subject is a signal that product leaders everywhere are grappling with the implications of this paradigm shift, wondering what parts of their role will endure and which will be automated away.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI is reshaping not just the products we build but the very nature of work itself. For Product Managers, this turning point demands a shift in mindset, moving beyond execution to strategy, ethics, and orchestration.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;1) Turning points mean overhaul, not tweaks&lt;/h2&gt;
&lt;p&gt;Perez shows each revolution reconfigures the economy’s rules: capital flows, institutions, and the organisation of production all change. Companies that handled electricity like “just a new machine” lost to those that redesigned factories, supply chains, and skills for continuous, scalable power. The lesson carries forward: &lt;strong&gt;you don’t “install” a revolution, you rebuild around it.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Carlota Pérez puts it starkly:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Societies are profoundly shaken and shaped by each technological revolution and, in turn, the technological potential is shaped and steered as a result of intense social, political and ideological confrontations and compromises.” — Carlota Pérez&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the scale of what we face with AI. It is not a feature set, it is a societal shift that will demand structural responses.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;2) Why the AI turning point is different&lt;/h2&gt;
&lt;p&gt;In prior ages, machines replaced muscle, digitisation replaced paper, and software scaled logic. AI now automates and scales &lt;strong&gt;analysis, drafting, synthesis, and decision support&lt;/strong&gt;. The consequences:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Human-centred responsibilities become automatable.&lt;/strong&gt; Research synthesis, competitive scans, early wireframes and copy, call triage, contract pre-review, first-line support: large chunks of each can already be handled by AI agents.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Barriers to entry collapse.&lt;/strong&gt; Small, AI-augmented teams can rival large incumbents on output speed and variety.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Feedback loops compress.&lt;/strong&gt; AI shortens the ideate, build, measure cycle, enabling &quot;learn before launch&quot; and rapid iteration.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Marty Cagan has noted the magnitude of this shift:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“What makes this discussion so hard is that almost every day things are changing … AI is a goldmine of opportunity. It’s also the biggest threat to how we do products.” — Marty Cagan&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Paweł Huryn, in collaboration with Miqdad Jaffer (Product Lead @ OpenAI), cautions that most failures come from &lt;strong&gt;treating AI like a bolt-on feature&lt;/strong&gt; rather than a system requiring new alignment, risk thinking and evaluation. Their widely shared &lt;strong&gt;AI PRD Template&lt;/strong&gt; forces teams to consider business case, failure modes (for example hallucinations), and guardrails up front, because skipping AI-specific considerations is exactly how teams ship clever demos that don’t survive contact with reality.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Given the hype around AI, many implement AI without a clear, justified business case… AI-specific considerations are often overlooked.” — &lt;em&gt;Product Compass (AI PRD, Huryn &amp;amp; Jaffer)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Read it here: &lt;a href=&quot;https://www.productcompass.pm/p/ai-prd-template&quot;&gt;AI PRD Template — Product Compass&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;3) What changes for Product Managers (and why junior tasks vanish)&lt;/h2&gt;
&lt;p&gt;For years, junior PMs climbed by doing &lt;strong&gt;repeatable, tactical&lt;/strong&gt; work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Writing user stories and acceptance criteria&lt;/li&gt;
&lt;li&gt;Summarising interviews and clustering insights&lt;/li&gt;
&lt;li&gt;Competitive analysis and teardown decks&lt;/li&gt;
&lt;li&gt;Drafting copy and low-fi wireframes&lt;/li&gt;
&lt;li&gt;Pulling analytics and building dashboards&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;AI agents can already handle large portions of this. Huryn and Jaffer both emphasise &lt;em&gt;AI literacy&lt;/em&gt; and &lt;em&gt;AI intuition&lt;/em&gt; as the new baseline: understanding probabilistic behaviour, non-determinism, error analysis, and how to iterate prompts, retrieval and context.&lt;/p&gt;
&lt;p&gt;Marty Cagan captures the enduring essence of PM work:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“The most important thing to remember about product management is that it’s not your job to make things happen, it’s your job to make sure things happen.” — Marty Cagan&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This distinction is even sharper in the AI era. PMs must orchestrate, align, and decide, not write all the tickets themselves.&lt;/p&gt;
&lt;p&gt;Lenny Rachitsky echoes this: AI will automate some PM activities, while amplifying others (influence, judgement, storytelling, prioritisation). PMs are &lt;strong&gt;well positioned&lt;/strong&gt; if they move up-stack to what cannot be automated: &lt;a href=&quot;https://www.lennysnewsletter.com/p/why-pms-are-best-positioned-to-thrive&quot;&gt;Why PMs Are Best Positioned to Thrive in the AI Era&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;4) A possible playbook to adapt&lt;/h2&gt;
&lt;p&gt;There is no single blueprint for how Product Managers should respond to the AI era. Each organisation, product, and market context will be different. That said, there are some patterns that appear promising, and which could serve as a starting point for PMs looking to adapt. Think of this less as a prescriptive checklist and more as a set of suggestions that might help guide your approach.&lt;/p&gt;
&lt;h3&gt;Build AI literacy and “AI product sense”&lt;/h3&gt;
&lt;p&gt;You don’t need to become a data scientist, but you do need to understand how AI systems behave in practice. Concepts like &lt;strong&gt;embeddings&lt;/strong&gt;, &lt;strong&gt;retrieval-augmented generation (RAG)&lt;/strong&gt;, &lt;strong&gt;fine-tuning&lt;/strong&gt;, &lt;strong&gt;hallucination&lt;/strong&gt;, and &lt;strong&gt;model drift&lt;/strong&gt; should feel as familiar to you as “MVP” or “retention curve”. Without this fluency, it’s difficult to make credible product decisions or challenge technical trade-offs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Instead of just accepting an engineering estimate, a PM could prototype a feature themselves using a framework like LangChain or a no-code option like a custom GPT. This hands-on experiment helps them understand whether the model reliably returns accurate results. They can then have an informed discussion with engineering about latency, cost per API call, and what failure handling will look like.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Use AI-augmented discovery (but validate with people)&lt;/h3&gt;
&lt;p&gt;Discovery is still about learning what customers need, but AI lets you accelerate early steps. Large language models can cluster survey responses, identify patterns in support tickets, and even suggest unmet needs. But this is a &lt;em&gt;starting point&lt;/em&gt;, not a substitute for talking to customers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Imagine you receive 1,000 pieces of open-text feedback from beta users. Instead of reading them all, you feed them into an AI model to generate clusters: “speed issues”, “confusing navigation”, “missing integrations”. The PM then validates those categories by calling five users in each cluster to ask: &lt;em&gt;“Tell me more about when this slows you down.”&lt;/em&gt; AI accelerates pattern recognition, humans still validate what matters most.&lt;/p&gt;
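&lt;p&gt;To make that workflow concrete, here is a deliberately tiny, keyword-based sketch of the clustering step. A real implementation would use an embedding model or an LLM rather than hand-picked keywords; the theme names and comments below are invented:&lt;/p&gt;

```python
from collections import defaultdict

# Invented theme keywords. In practice an embedding model or LLM would
# discover clusters; a hand-built lookup is only for illustration.
THEMES = {
    "speed issues": ["slow", "lag", "loading"],
    "confusing navigation": ["menu", "find", "navigate"],
    "missing integrations": ["integration", "connect", "api"],
}

def cluster_feedback(comments):
    """Bucket each comment into the first theme whose keyword it mentions."""
    clusters = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        theme = next(
            (name for name, words in THEMES.items()
             if any(word in lowered for word in words)),
            "uncategorised",
        )
        clusters[theme].append(comment)
    return dict(clusters)

feedback = [
    "The dashboard is so slow to load",
    "I could not find the export option in the menu",
    "Please add a Slack integration",
]
for theme, items in cluster_feedback(feedback).items():
    print(f"{theme}: {len(items)} comment(s)")
```

&lt;p&gt;The clusters are only a starting point: the follow-up calls to real users are what turn the buckets into insight.&lt;/p&gt;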
&lt;hr /&gt;
&lt;h3&gt;Redesign metrics for outcomes, not outputs&lt;/h3&gt;
&lt;p&gt;Traditional product metrics, such as velocity, number of tickets closed, or features shipped, are less meaningful when AI can generate outputs instantly. Instead, focus on &lt;strong&gt;outcomes&lt;/strong&gt;: do customers &lt;em&gt;trust&lt;/em&gt; the AI? Are they &lt;em&gt;retaining&lt;/em&gt;? Do they &lt;em&gt;get value faster&lt;/em&gt;?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A team launches an AI-powered writing assistant. Instead of reporting &quot;we shipped 12 features&quot;, the PM tracks &lt;strong&gt;Time to Value (TTV)&lt;/strong&gt;: how quickly a new user produces their &lt;em&gt;first useful draft&lt;/em&gt;. If TTV drops from 20 minutes to 3 minutes after redesigning onboarding, that&apos;s evidence the AI is working. Similarly, tracking &lt;strong&gt;Time to Learn (TTL)&lt;/strong&gt;, for example how fast the team can validate whether a new prompt-engineering change reduces hallucinations, becomes central to iteration speed.&lt;/p&gt;
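&lt;p&gt;Time to Value is straightforward to compute once you log the two events it depends on. A minimal sketch, with an invented event log mapping each user to a signup time and a first-useful-draft time:&lt;/p&gt;

```python
from datetime import datetime

# Invented event log: user -> (signed_up_at, first_useful_draft_at).
events = {
    "user_a": (datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 9, 18)),
    "user_b": (datetime(2025, 9, 1, 10, 0), datetime(2025, 9, 1, 10, 4)),
    "user_c": (datetime(2025, 9, 2, 14, 0), datetime(2025, 9, 2, 14, 2)),
}

def median_ttv_minutes(events):
    """Median minutes from signup to first useful draft."""
    minutes = sorted(
        (draft - signup).total_seconds() / 60
        for signup, draft in events.values()
    )
    mid = len(minutes) // 2
    if len(minutes) % 2:
        return minutes[mid]
    return (minutes[mid - 1] + minutes[mid]) / 2

print(f"Median TTV: {median_ttv_minutes(events):.0f} minutes")
```

&lt;p&gt;Using the median rather than the mean keeps one user who wandered off for a week from masking a genuine onboarding improvement.&lt;/p&gt;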
&lt;hr /&gt;
&lt;h3&gt;Treat safety, ethics and explainability as product requirements&lt;/h3&gt;
&lt;p&gt;AI systems behave probabilistically; they can and will fail in unexpected ways. If PMs ignore ethical risks, bias, or explainability, they risk reputational damage or regulatory pushback. Treat these as &lt;em&gt;first-class features&lt;/em&gt; in your roadmap, not afterthoughts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A PM working on an AI-powered recruitment tool might add a requirement: &lt;em&gt;“The system must provide a plain-language rationale for why a candidate was shortlisted.”&lt;/em&gt; This could take the form of an “explain” button showing which parts of the CV matched the role description. Rather than being an extra feature, explainability is part of the core product requirement, because trust is the product.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Lead operating-model change (not just roadmaps)&lt;/h3&gt;
&lt;p&gt;AI is not just a product feature, it changes how organisations must work. PMs need to step into a leadership role, helping decide how teams are structured, how budgets are allocated, and where human oversight is required.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; At a SaaS company, a PM notices multiple teams are experimenting with their own AI chatbots, duplicating work. Instead of letting each team go their own way, the PM champions the creation of a &lt;strong&gt;shared AI platform team&lt;/strong&gt;. This avoids duplicated infrastructure costs and ensures consistent guardrails for safety. In doing so, the PM is not just managing a feature, they are helping re-architect the operating model.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Become the narrative architect&lt;/h3&gt;
&lt;p&gt;With AI products, much of the PM’s influence comes from explaining &lt;em&gt;why this AI matters&lt;/em&gt;, &lt;em&gt;how it works&lt;/em&gt;, and &lt;em&gt;what guardrails are in place&lt;/em&gt;. Engineers, executives, sales, and customers all need confidence. A good PM turns technical complexity into a story people can rally around.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A PM launching an AI-powered financial advisor doesn’t just say “it uses GPT-4 for recommendations”. Instead, they frame it as: &lt;em&gt;“Our AI helps you make decisions in seconds that used to take hours. It suggests options but always gives you the source so you can double-check. Think of it as a co-pilot, not a replacement for your judgement.”&lt;/em&gt; This framing reassures compliance teams, inspires marketing, and sets realistic customer expectations.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;Pérez and Kersten remind us that turning points don’t reward half-measures. You can’t &quot;install&quot; a technological revolution, you &lt;strong&gt;rebuild around it&lt;/strong&gt;. AI isn’t just another wave inside the Age of Software, it’s a fundamental shift in &lt;strong&gt;how cognitive work is done&lt;/strong&gt;. That means roles, operating models, and industries are up for renegotiation.&lt;/p&gt;
&lt;p&gt;For Product Managers, the implication is clear:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The tactical ladder is disappearing as AI handles junior-level tasks.&lt;/li&gt;
&lt;li&gt;The new job is judgement, orchestration, ethics, metrics, and organisational change.&lt;/li&gt;
&lt;li&gt;The winners won’t be the teams that “adopt a framework”, but the ones that &lt;strong&gt;overhaul how they operate&lt;/strong&gt;, anchored in outcomes and trust.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The turning point is here. &lt;strong&gt;Adapt now, or be reshaped by the next age.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;References &amp;amp; Further Reading&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.amazon.co.uk/Project-Product-Networks-Transform-Business/dp/1942788398&quot;&gt;Mik Kersten, &lt;em&gt;Project to Product&lt;/em&gt; (Amazon UK)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.amazon.co.uk/Technological-Revolutions-Financial-Capital-Dynamics/dp/1843763311&quot;&gt;Carlota Perez, &lt;em&gt;Technological Revolutions and Financial Capital&lt;/em&gt; (Amazon UK)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.productcompass.pm/&quot;&gt;Paweł Huryn — Product Compass&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.productcompass.pm/p/ai-prd-template&quot;&gt;Paweł Huryn &amp;amp; Miqdad Jaffer (Product Lead @ OpenAI) — AI PRD Template&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.productcompass.pm/p/ai-product-manager-glossary&quot;&gt;Paweł Huryn — AI Glossary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.productcompass.pm/p/the-ultimate-list-of-product-metrics&quot;&gt;Paweł Huryn — The Ultimate List of Product Metrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.svpg.com/ai-product-management/&quot;&gt;Marty Cagan (SVPG) — AI Product Management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.svpg.com/creating-intelligent-products/&quot;&gt;Marty Cagan (SVPG) — Creating Intelligent Products&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/why-pms-are-best-positioned-to-thrive&quot;&gt;Lenny Rachitsky — Why PMs Are Best Positioned to Thrive&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.lennysnewsletter.com/p/how-ai-will-impact-product-management&quot;&gt;Lenny Rachitsky — How AI Will Impact PM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.news.aakashg.com/p/ai-prd&quot;&gt;Aakash Gupta — The AI PRD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><category>AI</category><category>Product Management</category><category>Technological Revolutions</category><category>Carlota Perez</category><category>Mik Kersten</category><category>Pawel Huryn</category><category>Miqdad Jaffer</category><author>Steve James</author></item><item><title>Scaling Agile Without Losing Its Soul: A Product Leader’s Perspective on SAFe, Flow, and Value Streams</title><link>https://stvpj.com/blog/scaling-agile/</link><guid isPermaLink="true">https://stvpj.com/blog/scaling-agile/</guid><description>How Product leaders can scale Agile organisations thoughtfully, leveraging frameworks like SAFe, Flow, and the Product Operating Model without losing Agile&apos;s core principles.</description><pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Agile began with a promise: faster delivery, empowered teams, and relentless customer focus.&lt;/p&gt;
&lt;p&gt;For those of us leading Product organisations, staying true to those ideals becomes exponentially harder as we scale — when small, nimble teams become hundreds of engineers and multiple interdependent systems.&lt;/p&gt;
&lt;p&gt;The question becomes:&lt;br /&gt;
&lt;strong&gt;How do we scale without losing what made Agile powerful in the first place?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Many have tried to answer it.&lt;br /&gt;
&lt;strong&gt;Dean Leffingwell’s SAFe&lt;/strong&gt;, &lt;strong&gt;Marty Cagan’s Product Operating Model&lt;/strong&gt;, &lt;strong&gt;Mik Kersten’s Flow Framework&lt;/strong&gt; — each offers valuable tools and insights.&lt;/p&gt;
&lt;p&gt;But here&apos;s the truth:&lt;br /&gt;
&lt;strong&gt;There is no silver bullet.&lt;/strong&gt; No framework, no model, no operating system can simply be &quot;installed&quot; to make scaling Agile effortless.&lt;/p&gt;
&lt;p&gt;Instead, great Product organisations treat these frameworks as &lt;strong&gt;toolkits&lt;/strong&gt;, not dogma.&lt;br /&gt;
We &lt;strong&gt;adapt concepts thoughtfully&lt;/strong&gt; — never blindly applying &quot;best practices&quot; without understanding our context.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What SAFe Gets Right About Scaling Complex Systems&lt;/h2&gt;
&lt;p&gt;At its core, &lt;strong&gt;SAFe offers something important&lt;/strong&gt;:&lt;br /&gt;
A way to think about scaling agility across large, high-risk, high-complexity environments where many teams must move in concert.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Value Stream Layer&lt;/strong&gt; in SAFe recognises that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple Agile Release Trains (ARTs) must be coordinated without losing autonomy.&lt;/li&gt;
&lt;li&gt;Larger-scale solution development requires new roles (like Value Stream Engineers and Solution Management) to balance technical, process, and content leadership.&lt;/li&gt;
&lt;li&gt;Solutions live inside complex contexts, and understanding that environment is essential.&lt;/li&gt;
&lt;li&gt;Capabilities must be incrementally delivered, not deferred into giant waterfall releases.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These concepts — &lt;strong&gt;Value Streams&lt;/strong&gt;, &lt;strong&gt;Solution Intent&lt;/strong&gt;, &lt;strong&gt;Capabilities&lt;/strong&gt;, &lt;strong&gt;customer collaboration across scale&lt;/strong&gt; — are powerful ideas that every scaling organisation should grapple with, whether they formally &quot;use SAFe&quot; or not.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Where Scaling Efforts Go Wrong&lt;/h2&gt;
&lt;p&gt;The failure mode isn’t SAFe itself, or Flow, or any other model.&lt;br /&gt;
It’s how organisations &lt;strong&gt;misinterpret or misuse&lt;/strong&gt; them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Turning flexible frameworks into rigid bureaucratic process templates.&lt;/li&gt;
&lt;li&gt;Prioritising governance and reporting over outcomes and learning.&lt;/li&gt;
&lt;li&gt;Smothering teams under compliance paperwork instead of enabling autonomy.&lt;/li&gt;
&lt;li&gt;Rebuilding the very command-and-control structures Agile was meant to replace — but now with new names.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When scaling frameworks are &lt;strong&gt;weaponised as control mechanisms&lt;/strong&gt;, we don&apos;t just fail to scale Agile — we actively destroy it.&lt;/p&gt;
&lt;p&gt;The tragedy is that organisations then blame the framework, when the real issue is a failure of &lt;strong&gt;mindset and leadership&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Leading Scaled Agile: Principles Over Prescriptions&lt;/h2&gt;
&lt;p&gt;As a Product leader, my focus is not on &quot;installing&quot; frameworks.&lt;br /&gt;
It’s on &lt;strong&gt;guiding scaling efforts based on enduring Agile principles&lt;/strong&gt;:&lt;/p&gt;
&lt;h3&gt;1. &lt;strong&gt;Organise Around Value&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Start with Value Streams — not project budgets, not team head-counts.&lt;br /&gt;
Understand how value flows to customers and align teams around that.&lt;/p&gt;
&lt;h3&gt;2. &lt;strong&gt;Empower Teams at Every Level&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;No matter how large the system, autonomy remains essential.&lt;br /&gt;
Roles like Solution Management or Value Stream Engineering exist to enable teams — not control them.&lt;/p&gt;
&lt;h3&gt;3. &lt;strong&gt;Deliver Incrementally&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Whether you call them Features, Capabilities, or Flow Items, the mandate is the same:&lt;br /&gt;
Ship value early and often. Learn fast. Avoid big-bang integration disasters.&lt;/p&gt;
&lt;h3&gt;4. &lt;strong&gt;Preserve Flexibility, Then Converge&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Scaling doesn’t mean fixing all requirements up front.&lt;br /&gt;
It means managing uncertainty thoughtfully — using modelling, analysis, and incremental validation to &lt;em&gt;converge on the right solution&lt;/em&gt; over time.&lt;/p&gt;
&lt;h3&gt;5. &lt;strong&gt;Treat Customers and Suppliers as Full Members of the Value Stream&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Customers aren&apos;t stakeholders &quot;over there.&quot;&lt;br /&gt;
Suppliers aren&apos;t external vendors.&lt;br /&gt;
They are part of the end-to-end system and must be engaged continuously, not just at contract milestones.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Frameworks Are Maps, Not Mandates&lt;/h2&gt;
&lt;p&gt;SAFe offers a map.&lt;br /&gt;
So does Cagan&apos;s Product Operating Model.&lt;br /&gt;
So does Kersten&apos;s Flow Framework.&lt;/p&gt;
&lt;p&gt;But &lt;strong&gt;maps are not a mandate&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Every organisation is different:&lt;br /&gt;
Its market, its technology, its history, its people.&lt;br /&gt;
Blindly applying someone else’s map will inevitably lead you into dead ends and wasted effort.&lt;/p&gt;
&lt;p&gt;Instead, &lt;strong&gt;Product leadership at scale is about thoughtful adaptation&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Borrow the best concepts.&lt;/li&gt;
&lt;li&gt;Respect the need for coordination at scale.&lt;/li&gt;
&lt;li&gt;Preserve Agile’s core — customer focus, empowered teams, fast feedback, continuous learning.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And above all, &lt;strong&gt;stay sceptical of anything that claims to be a one-size-fits-all solution&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Agility at scale is possible.&lt;br /&gt;
But it’s not installed.&lt;br /&gt;
It’s built — deliberately, collaboratively, patiently, and always with eyes open.&lt;/p&gt;
</content:encoded><category>Agile</category><category>SAFe</category><category>Flow Framework</category><category>Product Management</category><category>Scaling Agile</category><category>Value Streams</category><author>Steve James</author></item><item><title>The Innovation-Killing Myth of Predictability</title><link>https://stvpj.com/blog/embracing-uncertainty/</link><guid isPermaLink="true">https://stvpj.com/blog/embracing-uncertainty/</guid><description>How our obsession with certainty is undermining our ability to innovate, and what we should do instead.</description><pubDate>Thu, 24 Jul 2025 00:00:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&quot;It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.&quot;&lt;br /&gt;
—Mark Twain&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For decades, businesses have treated predictability as the holy grail of software development. If we can just get the estimates right, tighten up our roadmaps, and control the delivery pipeline, we tell ourselves, innovation will follow. But the truth is more uncomfortable: &lt;strong&gt;our pursuit of certainty is actively killing innovation.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Why Innovation Matters More Than Ever&lt;/h2&gt;
&lt;p&gt;Chasing predictable practices leads to predictable outcomes. And while predictability might offer short-term comfort, it rarely delivers long-term growth. Innovation is the engine of differentiation. Without it, your product or service becomes commoditised, and when you’re only competing on price, you&apos;re in a race to the bottom.&lt;/p&gt;
&lt;p&gt;As &lt;a href=&quot;https://profgalloway.com/&quot;&gt;Scott Galloway&lt;/a&gt; puts it, there are three questions every product or service must answer to succeed:&lt;/p&gt;
&lt;h3&gt;1. Differentiation&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Is the product or service truly differentiated? Does the strategy clear the hurdle of differentiation from our competitors?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If your offering isn’t meaningfully different, then why should customers choose you? Innovation is how you escape the sea of sameness. Differentiation isn’t just about having more features, it’s about positioning, usability, quality, emotional appeal, and solving a real pain better than anyone else. If you look like everyone else in your market, then price is all that’s left to compete on.&lt;/p&gt;
&lt;h3&gt;2. Relevance&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Does anyone care about the differentiation?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Relevance is the filter that determines whether differentiation has any actual value. You might have an elegant solution, but if it doesn’t speak to your users&apos; jobs-to-be-done, it&apos;s irrelevant. Relevance means being attuned to changing customer needs, market dynamics, and usage patterns. It’s not just about what you can do, it’s about what your users are trying to do.&lt;/p&gt;
&lt;h3&gt;3. Sustainability&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;What steps can we take to ensure that our products and services continue to be differentiated and relevant?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Markets evolve. Competitors copy. Technology shifts. A point of differentiation today can be table stakes tomorrow. Sustainability means building a repeatable system of learning and adapting. That includes investing in research, creating feedback loops, enabling rapid iteration, and maintaining organisational flexibility.&lt;/p&gt;
&lt;p&gt;In short: without innovation, your brand is just a logo. With it, you build something people want, remember, and come back for.&lt;/p&gt;
&lt;h2&gt;The Cost of Certainty&lt;/h2&gt;
&lt;p&gt;Donald Reinertsen, in &lt;a href=&quot;https://www.amazon.com/dp/1935401009&quot;&gt;&lt;em&gt;The Principles of Product Development Flow&lt;/em&gt;&lt;/a&gt;, puts it succinctly:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“We will create risk-averse development processes that strive to &apos;do it right the first time.&apos; This risk aversion will drive innovation out of our development process.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In trying to remove variability, we remove the very conditions required for discovery.&lt;/p&gt;
&lt;h2&gt;Variability Is the Work&lt;/h2&gt;
&lt;p&gt;Lean and Agile weren’t designed to enforce predictability, they were created to &lt;strong&gt;manage variability in service of value&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Responding to change over following a plan.”&lt;br /&gt;
— &lt;a href=&quot;https://agilemanifesto.org/&quot;&gt;Agile Manifesto&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Marty Cagan is clear on this point in &lt;a href=&quot;https://www.svpg.com/inspired-how-to-create-products-customers-love/&quot;&gt;&lt;em&gt;Inspired&lt;/em&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“No software development methodology, Agile, Lean, or otherwise, can ensure predictability.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The goal is not to follow a plan perfectly. It’s to inspect, adapt, and course-correct based on what you learn. As &lt;a href=&quot;https://dannorth.net/&quot;&gt;Dan North&lt;/a&gt; puts it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Predictability is a trap. If you&apos;re measuring success by your ability to predict the future, you’re optimising for the wrong thing.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Agile works not because it avoids variability, but because it embraces it.&lt;/p&gt;
&lt;h2&gt;Context Matters: The Cynefin Framework&lt;/h2&gt;
&lt;p&gt;Dave Snowden’s &lt;a href=&quot;https://www.cognitive-edge.com/the-cynefin-framework/&quot;&gt;Cynefin Framework&lt;/a&gt; is a decision-making model for managing uncertainty. Most innovation work happens in the &lt;strong&gt;Complex&lt;/strong&gt; domain — where cause and effect can only be known in retrospect.&lt;/p&gt;
&lt;p&gt;When we force complex work into predictable containers, we choke innovation.&lt;/p&gt;
&lt;h2&gt;Local Optimisations, Global Dysfunction&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://itrevolution.com/the-phoenix-project/&quot;&gt;&lt;em&gt;The Phoenix Project&lt;/em&gt;&lt;/a&gt;, Gene Kim and co-authors write:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Local optimisations are the enemy of global flow.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When teams are judged by outputs instead of outcomes, they optimise for activity, not value. As Marty Cagan reinforces in &lt;a href=&quot;https://www.svpg.com/empowered-ordinary-people-extraordinary-products/&quot;&gt;&lt;em&gt;Empowered&lt;/em&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Empowered product teams are assigned problems to solve... and are held accountable to results.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Why We Still Chase Control&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://www.amazon.co.uk/Radical-Focus-Achieving-Important-Objectives/dp/0996006028&quot;&gt;&lt;em&gt;Radical Focus&lt;/em&gt;&lt;/a&gt;, Christina Wodtke warns against turning OKRs into glorified task lists.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://www.goodreads.com/book/show/13530973-antifragile&quot;&gt;&lt;em&gt;Antifragile&lt;/em&gt;&lt;/a&gt;, Nassim Taleb argues:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“If you see fraud and do not say fraud, you are a fraud.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We pretend to plan our way out of complexity. But planning is not foresight, it’s often just structured denial.&lt;/p&gt;
&lt;h2&gt;A Better Way Forward&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Shift from Projects to Products&lt;/strong&gt; Long-lived teams build context and evolve solutions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Objectives, Not Deadlines&lt;/strong&gt; Let teams focus on problems, not fixed outputs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create Slack for Innovation&lt;/strong&gt; 100% utilisation destroys adaptability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Trust Teams with the Why&lt;/strong&gt; Empower them to find the right how.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Measure What Matters&lt;/strong&gt; Focus on outcomes, not just throughput.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Match Approach to Domain&lt;/strong&gt; Use Cynefin to know when to explore and when to execute.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Design for Differentiation, Relevance, and Sustainability&lt;/strong&gt; Solve meaningful problems in ways that last.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;Predictability isn’t inherently bad, but it must be applied with care. Roadmaps, OKRs, and deadlines are tools, not doctrine. When everything is optimised for predictability, innovation withers.&lt;/p&gt;
&lt;p&gt;The real work of building great products lies in navigating uncertainty, not eliminating it.&lt;/p&gt;
&lt;p&gt;Because if you can predict it, you’re probably not innovating.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Innovation is the child of freedom and the parent of growth.”&lt;br /&gt;
—William J. McDonough&lt;/p&gt;
&lt;/blockquote&gt;
</content:encoded><category>Innovation</category><category>Software Development</category><category>Uncertainty</category><category>Agile</category><category>Product Management</category><author>Steve James</author></item><item><title>Understanding Before Arguing: Why Belief Without Evidence Is So Dangerous</title><link>https://stvpj.com/blog/understanding-before-arguing/</link><guid isPermaLink="true">https://stvpj.com/blog/understanding-before-arguing/</guid><description>Exploring the importance of intellectual humility, the dangers of belief without evidence, and how we can build better conversations in a divided world.</description><pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, I came across a quote from Charlie Munger that has been stuck in my head:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;At first glance, it seems like common sense. But the more I sit with it, the more I realise how rare, and radical, it actually is.&lt;/p&gt;
&lt;p&gt;We live in a time where belief, not evidence, not nuance, not data, often seems to be the primary currency for truth. As long as something aligns with what we already think, it feels “right.” The problem is, that kind of thinking doesn’t leave a lot of room for understanding. It turns political identity into a kind of faith system: &lt;em&gt;I believe, therefore it is true. You don’t, therefore you are wrong, or worse, dangerous.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;What Munger’s talking about is a kind of intellectual discipline, or maybe just humility. It’s the idea that before you criticise someone’s viewpoint, you should be able to explain it fairly and fully, in a way they would recognise. It’s not about agreeing with them. It’s about giving their position the respect of genuine attention.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why This Matters in How We Argue&lt;/h2&gt;
&lt;p&gt;I think about this every time I scroll through political debates online. So many arguments are just people yelling past each other, beating up a cartoon version of the other side. We call this &lt;strong&gt;strawmanning&lt;/strong&gt;, the act of misrepresenting someone’s position so it’s easier to attack. Instead of engaging with what someone actually believes, we invent a flimsy imitation and tear that down.&lt;/p&gt;
&lt;p&gt;Closely related, and just as common, is the misuse of a rhetorical tool called &lt;strong&gt;&lt;em&gt;reductio ad absurdum&lt;/em&gt;&lt;/strong&gt;. In its proper form, it&#8217;s a logical technique used to disprove a point by showing that it leads to an absurd or contradictory outcome (&lt;a href=&quot;https://www.britannica.com/topic/reductio-ad-absurdum&quot;&gt;Britannica&lt;/a&gt;). But in everyday argument, it&#8217;s often weaponised: someone exaggerates their opponent&#8217;s view to an extreme or ridiculous scenario, and then critiques that instead.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“You want to regulate car emissions? What’s next — banning all cars and making us walk to work?”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;At that point, the original argument isn’t even in the room anymore. It’s not dialogue, it’s mockery dressed up as logic.&lt;/p&gt;
&lt;p&gt;What we need more of is the opposite: &lt;strong&gt;steelmanning&lt;/strong&gt;, engaging with the strongest, most thoughtful version of someone else’s view, even if (especially if) we disagree with it.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Reverence and Risk of Belief&lt;/h2&gt;
&lt;p&gt;Belief and faith are often seen as admirable, especially in religious traditions. They signal conviction, loyalty, identity. But that admiration can come at a cost when belief is decoupled from evidence.&lt;/p&gt;
&lt;p&gt;Philosopher W.K. Clifford made this point forcefully:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”&lt;/em&gt;&lt;br /&gt;
(&lt;a href=&quot;https://1000wordphilosophy.com/2022/01/28/ethics-of-belief/&quot;&gt;Clifford&apos;s Principle&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Clifford argued that belief isn’t just a private matter, it shapes our actions and our society. When we build entire ideologies, policies, or communities around ideas we haven&apos;t truly tested, we open the door to harm.&lt;/p&gt;
&lt;p&gt;And yet, the opposite view exists too. William James argued that sometimes belief &lt;em&gt;without&lt;/em&gt; evidence is necessary, like believing in the possibility of love before it’s proven, or the success of a risky venture. He called this &lt;em&gt;“The Will to Believe”&lt;/em&gt; (&lt;a href=&quot;https://en.wikipedia.org/wiki/The_Will_to_Believe&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Both positions reflect the complexity of belief. But in public discourse, when beliefs calcify into identity, when “I believe” becomes a substitute for “I know”, the danger increases.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;This Isn’t Just Politics&lt;/h2&gt;
&lt;p&gt;It’s tempting to think this only applies to politics, Labour vs Tory, Democrat vs Republican, Leave vs Remain. But the same dynamic shows up everywhere:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;iOS vs Android&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Religion vs Atheism&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Meat eaters vs Vegans&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pro-vaccine vs Vaccine-sceptic&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Climate change believers vs deniers&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Free speech absolutists vs content moderation advocates&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Science vs spirituality&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Parenting styles: Gentle vs Authoritative&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These debates often escalate not because the issues are unsolvable, but because each side reduces the other to caricature. We don’t engage, we retreat into tribes.&lt;/p&gt;
&lt;p&gt;But what if, instead of digging in, we paused and asked: &lt;em&gt;Could I explain their point of view, fairly, generously, clearly?&lt;/em&gt; That’s the kind of thinking Munger was getting at.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why We Struggle: The Psychology of Stubborn Belief&lt;/h2&gt;
&lt;p&gt;Why is this so hard?&lt;/p&gt;
&lt;p&gt;Because we’re not nearly as rational as we think we are. Multiple psychological effects explain why we hold tight to beliefs, even when they’re wrong.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Motivated reasoning&lt;/strong&gt;: We unconsciously distort facts to support what we already believe. As psychologist Peter Ditto puts it, this is “the emotional tail wagging the rational dog.” (&lt;a href=&quot;https://www.mainepublic.org/show/maine-calling/2019-08-22/peter-ditto-the-psychology-of-political-polarization-motivated-reasoning&quot;&gt;Maine Public&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Confirmation bias&lt;/strong&gt;: We selectively seek and favour information that confirms our views and avoid what challenges them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Belief perseverance&lt;/strong&gt;: Even after being shown that a belief is wrong, we often continue to cling to it. The original belief becomes anchored in our sense of identity.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And at a deeper level, &lt;strong&gt;Moral Foundations Theory&lt;/strong&gt;, developed by Jonathan Haidt, shows that people weight moral values like fairness, loyalty, authority, or sanctity very differently, which makes certain arguments persuasive to one group and meaningless to another. (&lt;a href=&quot;https://moralfoundations.org&quot;&gt;moralfoundations.org&lt;/a&gt;)&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;A Simple Challenge&lt;/h2&gt;
&lt;p&gt;So what do we do with all this?&lt;/p&gt;
&lt;p&gt;Here’s a challenge I’m trying to live by, and one I think Charlie Munger would appreciate:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Pause&lt;/strong&gt; before jumping into disagreement.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Steelman&lt;/strong&gt; the opposing view: articulate it as clearly and charitably as you can.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ask yourself&lt;/strong&gt;: “Would they agree with my summary of their position?”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Only then&lt;/strong&gt; — if you still feel the need — share your response.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Because understanding doesn’t weaken your argument. It strengthens your humanity.&lt;/p&gt;
&lt;p&gt;And maybe, if more of us did this, we’d stop yelling across divides and start building bridges across them.&lt;/p&gt;
</content:encoded><category>Belief</category><category>Faith</category><category>Politics</category><category>Psychology</category><category>Debate</category><category>Charlie Munger</category><category>Society</category><category>Culture</category><category>Thinking</category><author>Steve James</author></item><item><title>How Security and Data Governance Improve User Experience</title><link>https://stvpj.com/blog/how-security-and-data-governance-improve-user-experience/</link><guid isPermaLink="true">https://stvpj.com/blog/how-security-and-data-governance-improve-user-experience/</guid><description>Exploring how strong security practices and transparent data governance not only protect users but also build trust, enhance engagement, and create a better overall user experience.</description><pubDate>Wed, 07 May 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We live in an age where our personal information is increasingly digitised. Every click, every search, and every online purchase leaves behind a digital footprint. As a Product Manager, my main goal is not just to deliver feature-rich applications but to ensure that these applications are safe, trustworthy, and provide an excellent user experience. One might assume that optimising for security and data governance might hinder the user experience. Contrary to that belief, I&apos;m here to argue that enhancing security and data governance is, in fact, a significant booster to the overall user experience.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Building Trust&lt;/strong&gt; The most apparent benefit of security optimisation is trust. When users realise that a product or service prioritises their security, they are more likely to trust it. Trust, in the digital realm, is not just a nice-to-have; it&apos;s a necessity. A product that loses its users&apos; trust might as well lose its users.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Smooth Experience with Fewer Interruptions&lt;/strong&gt; Imagine a scenario where an application gets compromised. This would lead to downtimes, emergency maintenance, or even data breaches. Each of these outcomes disrupts the user experience. By investing in security from the get-go, we ensure a seamless experience for our users, free from unexpected disruptions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Empowerment through Transparency&lt;/strong&gt; Data governance isn&apos;t just about keeping user data safe; it&apos;s about being transparent about how this data is used. By giving users a clear insight into what data is collected, why it&apos;s collected, and how it&apos;s used, we empower them. An empowered user is an engaged user, and an engaged user is the key to enhanced user experience.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tailored User Experiences without the Creepiness&lt;/strong&gt; One of the main reasons companies collect data is to provide a more tailored experience to their users. However, there&apos;s a fine line between personalisation and intrusiveness. Proper data governance ensures that personalisation is done right, without crossing the boundaries of user comfort.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Compliance Equals Access&lt;/strong&gt; In today&apos;s global market, regulatory compliance is not optional. Regulations like GDPR in Europe or CCPA in California have strict requirements regarding data privacy and security. By adhering to these regulations, we not only avoid hefty fines but ensure that our product is accessible to a broader audience without regional restrictions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Future-proofing the Business&lt;/strong&gt; Security threats and data misuse are not static; they evolve. By building a foundation based on robust security and data governance, we are preparing our product for the challenges of the future. This ensures that our users continue to have a consistent and safe experience even as digital landscapes change.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;User Feedback and Continuous Improvement&lt;/strong&gt; When users feel safe and trust a platform, they are more likely to provide genuine feedback. This feedback is invaluable. It allows us to continuously refine and improve, ensuring the user experience keeps getting better.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In conclusion, security and data governance are not mere checkboxes in the product development process. They are crucial pillars that hold the weight of user trust and engagement. By integrating them into the core of our products, we are not sacrificing user experience; we are elevating it.&lt;/p&gt;
&lt;p&gt;Remember, in the end, it&apos;s not just about building a product; it&apos;s about building a relationship with our users. And like any successful relationship, it must be built on trust, transparency, and mutual respect.&lt;/p&gt;
</content:encoded><category>Product Management</category><category>User Experience</category><category>Security</category><category>Data Governance</category><category>Trust</category><category>Compliance</category><author>Steve James</author></item><item><title>Embracing Iterative Requirement Development in Product Ownership</title><link>https://stvpj.com/blog/embracing-iterative-requirement-development/</link><guid isPermaLink="true">https://stvpj.com/blog/embracing-iterative-requirement-development/</guid><description>How modern product teams can move beyond traditional requirement processes and embrace Agile, empirical, and iterative approaches for greater success.</description><pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&quot;The definition of insanity is doing the same thing over and over again and expecting different results.&quot;&lt;br /&gt;
— &lt;em&gt;Attributed to Einstein&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;The Myth of Up-Front Certainty&lt;/h2&gt;
&lt;p&gt;As a Product Owner, one of the most common questions I get is:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;“How do you manage requirement development in an Agile way?”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It’s a fair question — traditional product requirement processes often assume you can capture everything upfront, get it signed off, and hand it over to developers like a blueprint for a house.&lt;/p&gt;
&lt;p&gt;This old-school approach typically follows these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Plan the analysis approach&lt;/li&gt;
&lt;li&gt;Elicit requirements&lt;/li&gt;
&lt;li&gt;Analyse and design the solution&lt;/li&gt;
&lt;li&gt;Prioritise the requirements&lt;/li&gt;
&lt;li&gt;Get approval and sign-off&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once finalised, these requirements are passed along to the development team, with the expectation that the resulting product will match the original vision.&lt;/p&gt;
&lt;p&gt;But here&apos;s the problem: &lt;strong&gt;this process almost never works in software.&lt;/strong&gt; According to the &lt;a href=&quot;https://www.projectsmart.co.uk/white-papers/chaos-report.pdf&quot;&gt;CHAOS Report&lt;/a&gt; by the Standish Group, projects that follow this linear model consistently underperform — and many outright fail.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Experience Has Taught Us&lt;/h2&gt;
&lt;p&gt;Years of building software products have made a few truths crystal clear:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You can’t know everything at the beginning.&lt;/li&gt;
&lt;li&gt;Requirements will change.&lt;/li&gt;
&lt;li&gt;Written requirements are always subject to interpretation.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Despite our desire for predictability, software development is &lt;strong&gt;inherently complex and uncertain&lt;/strong&gt;. Efforts to impose rigid plans often backfire, introducing waste, rework, and stakeholder frustration.&lt;/p&gt;
&lt;p&gt;Instead of trying to predict the future, successful product teams embrace &lt;strong&gt;empiricism&lt;/strong&gt; — the practice of making decisions based on what &lt;em&gt;is&lt;/em&gt;, rather than what we &lt;em&gt;hope&lt;/em&gt; will happen.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Empiricism means working in a fact-based, experience-based, and evidence-based manner.”&lt;br /&gt;
— &lt;a href=&quot;https://www.scrum.org/resources/blog/three-pillars-empiricism-scrum&quot;&gt;Scrum.org: The Three Pillars of Empiricism&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;Predictability vs Progress&lt;/h2&gt;
&lt;p&gt;In traditional settings, predictability is prized. But demanding it in uncertain environments leads to a dangerous illusion of control.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“We work in an uncertain world, and our main goal in pursuing agility is to confront the unknown… Pursuing predictability causes us to lay a veneer of fiction over the real world.”&lt;br /&gt;
— &lt;a href=&quot;https://www.scrum.org/resources/blog/escaping-predictability-trap&quot;&gt;Scrum.org: Escaping the Predictability Trap&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is why Agile frameworks such as Scrum promote &lt;strong&gt;short feedback loops&lt;/strong&gt;, &lt;strong&gt;continuous learning&lt;/strong&gt;, and &lt;strong&gt;transparent decision-making&lt;/strong&gt;. These practices allow Product Owners to continuously refine and reprioritise based on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Actual user feedback&lt;/li&gt;
&lt;li&gt;Technical insights&lt;/li&gt;
&lt;li&gt;Market signals&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this model, the product backlog is a &lt;strong&gt;living document&lt;/strong&gt;, not a requirements bible.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Iterative Requirement Development in Practice&lt;/h2&gt;
&lt;p&gt;So how should Product Owners approach requirements in this dynamic environment?&lt;/p&gt;
&lt;p&gt;Instead of detailed specs for a year’s worth of work, aim for just enough clarity — at just the right time.&lt;/p&gt;
&lt;p&gt;A healthy product backlog typically contains:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2–3 sprints’ worth of &lt;em&gt;refined&lt;/em&gt; stories&lt;/li&gt;
&lt;li&gt;Lightly defined epics for the next quarter&lt;/li&gt;
&lt;li&gt;High-level ideas for future exploration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This model aligns with the &lt;a href=&quot;https://www.gyaco.com/en/2022/03/01/cone-of-uncertainty-is-another-reason-why-we-need-to-deliver-early-and-often/&quot;&gt;Cone of Uncertainty&lt;/a&gt; — a concept that acknowledges we know the least at the beginning of a project and gradually gain clarity through discovery and iteration.&lt;/p&gt;
&lt;p&gt;By delaying commitment until the last responsible moment, teams can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Minimise waste&lt;/li&gt;
&lt;li&gt;Reduce rework&lt;/li&gt;
&lt;li&gt;Respond quickly to change&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;The Role of Discovery and Up-Front Analysis&lt;/h2&gt;
&lt;p&gt;Does this mean we abandon up-front planning altogether?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Absolutely not.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Good Product Ownership requires &lt;strong&gt;thoughtful discovery work&lt;/strong&gt; — especially at the outset of new initiatives. The key is to balance early analysis with flexibility.&lt;/p&gt;
&lt;p&gt;Marty Cagan’s concept of &lt;a href=&quot;https://svpg.com/nature-of-product-discovery/&quot;&gt;Product Discovery&lt;/a&gt; offers a powerful framework for this. Discovery involves defining the right problems to solve, validating solutions with customers, and aligning stakeholders — all before investing in development.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“The purpose of product discovery is to quickly separate the good ideas from the bad. We don’t want to build things customers won’t use or that won’t work for the business.”&lt;br /&gt;
— &lt;a href=&quot;https://svpg.com/nature-of-product-discovery/&quot;&gt;Marty Cagan, SVPG&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Not every part of a product requires the same depth of analysis. Skilled Product Owners must evaluate:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What’s known and stable (requiring less detail)&lt;/li&gt;
&lt;li&gt;What’s unknown or risky (requiring deeper investigation)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This &lt;strong&gt;risk-based approach to discovery&lt;/strong&gt; ensures we invest time where it matters most.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Final Thoughts: Empiricism is Empowering&lt;/h2&gt;
&lt;p&gt;When Product Owners embrace iterative, empirical requirement development, they create the conditions for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Better alignment with end users&lt;/li&gt;
&lt;li&gt;More resilient roadmaps&lt;/li&gt;
&lt;li&gt;Higher-value outcomes&lt;/li&gt;
&lt;li&gt;Stronger partnerships with development teams&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We stop pretending we know the future and instead build the capability to &lt;strong&gt;respond to it&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;As agile author Mike Cohn puts it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“We want to delay decisions until the last responsible moment to preserve flexibility and avoid unnecessary work.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you&apos;re still clinging to thick BRDs and waterfall mindsets in your product practice, it&apos;s time to evolve.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Additional Reading&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.scrum.org/resources/blog/three-pillars-empiricism-scrum&quot;&gt;The Three Pillars of Empiricism in Scrum – Scrum.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.scrum.org/resources/blog/escaping-predictability-trap&quot;&gt;Escaping the Predictability Trap – Scrum.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.projectsmart.co.uk/white-papers/chaos-report.pdf&quot;&gt;CHAOS Report – Standish Group&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.agilealliance.org/glossary/cone-of-uncertainty&quot;&gt;Cone of Uncertainty – Agile Alliance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://svpg.com/nature-of-product-discovery/&quot;&gt;Nature of Product Discovery – SVPG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://svpg.com/dual-track-agile/&quot;&gt;Dual-Track Agile – Marty Cagan, SVPG&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><category>Product Ownership</category><category>Agile</category><category>Requirement Development</category><category>Empiricism</category><category>Product Management</category><author>Steve James</author></item><item><title>Book Review: The Principles of Product Development Flow – A Masterclass in Modern Software Development Thinking</title><link>https://stvpj.com/blog/the-principles-of-product-development-flow/</link><guid isPermaLink="true">https://stvpj.com/blog/the-principles-of-product-development-flow/</guid><description>An in-depth review of Donald Reinertsen’s masterwork and why it’s essential reading for serious product managers.</description><pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As an experienced Product Manager who draws inspiration from thought leaders like Mik Kersten (&lt;em&gt;Project to Product&lt;/em&gt;), Marty Cagan (&lt;em&gt;Inspired&lt;/em&gt;, &lt;em&gt;Empowered&lt;/em&gt;), and Gene Kim (&lt;em&gt;The Phoenix Project&lt;/em&gt;, &lt;em&gt;The Unicorn Project&lt;/em&gt;), Donald Reinertsen’s &lt;em&gt;The Principles of Product Development Flow&lt;/em&gt; feels like a deep intellectual homecoming.&lt;/p&gt;
&lt;p&gt;In a world where agile slogans are sometimes thrown around without understanding the economic drivers behind them, Reinertsen delivers a technical and philosophical tour de force. This is not a beginner&apos;s guide or a book of platitudes. It&apos;s a dense, deliberate, and at times, uncompromising exploration of how to truly think about product development — as a system of flows constrained by economics, physics, and human behavior.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What Stands Out&lt;/h2&gt;
&lt;p&gt;✅ &lt;strong&gt;Economics First&lt;/strong&gt; Reinertsen relentlessly emphasizes that every product development decision should be economically motivated. Whether deciding to ship with known bugs or balancing feature creep, he demands that we quantify the &lt;em&gt;Cost of Delay&lt;/em&gt;.&lt;br /&gt;
In the same spirit that Mik Kersten ties software delivery to business outcomes in &lt;a href=&quot;https://projecttoproduct.org/&quot;&gt;&lt;em&gt;Project to Product&lt;/em&gt;&lt;/a&gt;, Reinertsen reminds us: &quot;Proxy metrics are dangerous distractions. Profitability is king.&quot;&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Queues are the Hidden Killer&lt;/strong&gt; Just like WIP bottlenecks in SAFe or the flow disruptions that Kersten describes, Reinertsen lays bare that unmanaged queues — unseen and unmeasured — are what sabotage cycle time, morale, and innovation.&lt;br /&gt;
His industrial-grade treatment of queueing theory was a wake-up call for me: traditional timeline management is not just ineffective, it’s actively harmful if queues remain invisible.&lt;/p&gt;
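&lt;p&gt;Little&apos;s Law makes that queue arithmetic concrete. The sketch below is not from the book, and the team numbers are invented, but the relationship it demonstrates (average cycle time equals work in process divided by throughput) is the one Reinertsen&apos;s queueing argument builds on:&lt;/p&gt;

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Toy numbers (hypothetical): a team with 24 items in flight that
# finishes 6 items per week.
def average_cycle_time(wip, throughput_per_week):
    """Expected weeks for a new item to flow through the system."""
    return wip / throughput_per_week

print(average_cycle_time(24, 6))  # 4.0 weeks
print(average_cycle_time(12, 6))  # 2.0 weeks: halve the queue, halve the wait
```

The point of the exercise: cutting the invisible queue is a direct, mechanical lever on cycle time, independent of how hard anyone works.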
&lt;p&gt;✅ &lt;strong&gt;Variability is Not the Enemy&lt;/strong&gt; Much like Gene Kim’s &quot;local optimizations are the enemy of global flow&quot; argument in &lt;a href=&quot;https://itrevolution.com/the-phoenix-project/&quot;&gt;&lt;em&gt;The Phoenix Project&lt;/em&gt;&lt;/a&gt;, Reinertsen’s nuanced view of variability resonates deeply.&lt;br /&gt;
In software, uncertainty is not a bug — it&apos;s the nature of the work. Instead of driving it out (as Six Sigma urges), we must manage and even harness variability economically.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;The Power of Small Batches and Fast Feedback&lt;/strong&gt; For any PM who loves the Lean Startup ethos but sometimes feels it&apos;s been dumbed down to &quot;fail fast&quot; memes, this book restores rigor. The push for smaller batch sizes and faster feedback isn’t just a process trick — it’s a calculated economic optimization.&lt;/p&gt;
&lt;p&gt;✅ &lt;strong&gt;Decentralization with Purpose&lt;/strong&gt; Echoing the autonomy principles Marty Cagan champions for empowered teams in &lt;a href=&quot;https://www.svpg.com/empowered/&quot;&gt;&lt;em&gt;Empowered&lt;/em&gt;&lt;/a&gt;, Reinertsen shows how decentralized decision-making — when combined with clear economic frameworks — is the only way to survive in high-variability environments.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Where It Might Challenge Modern PMs&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No Handholding&lt;/strong&gt; This is not a &quot;feel-good&quot; book. Reinertsen assumes you have an appetite for systems thinking, applied mathematics, and a fair bit of discomfort as he dismantles traditional PM orthodoxies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Economic Thinking is Hard&lt;/strong&gt; He demands more from PMs than many are used to. It’s no longer enough to ask “Is this feature valuable?” — you must ask “&lt;em&gt;How much is a week of delay on this feature worth to our lifecycle profits?&lt;/em&gt;”&lt;br /&gt;
It’s an intellectually heavier lift, but an essential one.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Final Verdict: Essential for Serious Product Thinkers&lt;/h2&gt;
&lt;p&gt;If you’re serious about mastering product management in the age of continuous delivery, platform thinking, and relentless business alignment, &lt;em&gt;The Principles of Product Development Flow&lt;/em&gt; deserves a place alongside &lt;em&gt;Inspired&lt;/em&gt;, &lt;em&gt;Project to Product&lt;/em&gt;, and &lt;em&gt;The Phoenix Project&lt;/em&gt; on your shelf.&lt;/p&gt;
&lt;p&gt;More than anything, Reinertsen empowers product leaders to think for themselves — to ditch proxy metrics, the worship of efficiency, and one-size-fits-all agile prescriptions — and &lt;em&gt;design systems&lt;/em&gt; that deliver value economically, efficiently, and sustainably.&lt;/p&gt;
&lt;p&gt;⭐️⭐️⭐️⭐️⭐️ (5/5)&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Key Takeaways&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Quantify the Cost of Delay&lt;/strong&gt; - If you can only measure one thing, measure Cost of Delay. It’s the &quot;golden key&quot; to prioritization and flow optimization.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prioritize Managing Queues Over Managing Timelines&lt;/strong&gt; - Unseen work-in-process queues create hidden delays and risk. Control them, and cycle time will take care of itself.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Variability Can Be an Asset&lt;/strong&gt; - Variability fuels innovation. Your goal isn’t to eliminate it, but to manage it economically.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Small Batch Sizes and Fast Feedback Are Non-Negotiable&lt;/strong&gt; - Move in small, economic increments to accelerate learning, reduce risk, and improve adaptability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Push Decisions Down, But Align on Outcomes&lt;/strong&gt; - Empower teams with decentralized control frameworks tied tightly to economic priorities, not just project plans.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
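&lt;p&gt;To make the first takeaway concrete, here is a minimal CD3 (Cost of Delay Divided by Duration) prioritisation sketch, a scheduling heuristic commonly associated with Reinertsen&apos;s economic framework. The feature names and figures are invented for illustration, not taken from the book:&lt;/p&gt;

```python
# CD3 prioritisation: schedule the work with the highest weekly cost of
# delay per week of duration. All entries below are hypothetical.
features = [
    {"name": "checkout revamp", "cost_of_delay": 30_000, "weeks": 6},
    {"name": "audit logging",   "cost_of_delay": 8_000,  "weeks": 1},
    {"name": "mobile app",      "cost_of_delay": 50_000, "weeks": 12},
]

def cd3(feature):
    # Weekly cost of delay divided by estimated duration in weeks.
    return feature["cost_of_delay"] / feature["weeks"]

for feature in sorted(features, key=cd3, reverse=True):
    print(feature["name"], round(cd3(feature), 1))
# "audit logging" schedules first despite the smallest cost of delay,
# because it clears the queue quickly.
```

Notice how the ranking differs from a naive sort on cost of delay alone; that inversion is exactly the economic reasoning the takeaway is asking for.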
&lt;hr /&gt;
&lt;h1&gt;References&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;The Principles of Product Development Flow&lt;/em&gt;, Donald Reinertsen, 2009&lt;/li&gt;
&lt;li&gt;Mik Kersten, &lt;a href=&quot;https://projecttoproduct.org/&quot;&gt;&lt;em&gt;Project to Product&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Marty Cagan, &lt;a href=&quot;https://www.svpg.com/inspired/&quot;&gt;&lt;em&gt;Inspired&lt;/em&gt;&lt;/a&gt; / &lt;a href=&quot;https://www.svpg.com/empowered/&quot;&gt;&lt;em&gt;Empowered&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Gene Kim, &lt;a href=&quot;https://itrevolution.com/the-phoenix-project/&quot;&gt;&lt;em&gt;The Phoenix Project&lt;/em&gt;&lt;/a&gt; / &lt;a href=&quot;https://itrevolution.com/the-unicorn-project/&quot;&gt;&lt;em&gt;The Unicorn Project&lt;/em&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><category>Book Review</category><category>Product Development</category><category>Flow</category><category>Agile</category><category>Product Management</category><author>Steve James</author></item><item><title>From Project to Product: A Paradigm Shift for the Age of Software</title><link>https://stvpj.com/blog/from-project-to-product/</link><guid isPermaLink="true">https://stvpj.com/blog/from-project-to-product/</guid><description>A summary of Mik Kersten’s &apos;Project to Product,&apos; exploring why traditional project models are failing and how the Flow Framework can help organizations thrive in the Age of Software.</description><pubDate>Sun, 20 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;By Mik Kersten | Summary by STVPJ&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As we progress through the digital era, the rules of business are rapidly changing. Companies that once dominated their industries are finding themselves outpaced by nimble startups and tech giants who’ve mastered the art of software delivery. In his groundbreaking book, &lt;em&gt;Project to Product&lt;/em&gt;, Mik Kersten reveals why traditional project-based management models are failing and offers a new framework—the &lt;strong&gt;Flow Framework&lt;/strong&gt;—to help organizations survive and thrive in the Age of Software.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Premise: Why the Project Model is Obsolete&lt;/h2&gt;
&lt;p&gt;Kersten opens the book with a compelling argument: the project-based structures that powered businesses through the Age of Mass Production are no longer fit for today’s software-driven world. Instead of treating software delivery as a series of finite, scope-bound projects, companies must adopt a &lt;strong&gt;product mindset&lt;/strong&gt;—focusing on long-term value streams and continuous delivery.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;We&apos;re managing software like we&apos;re still building bridges.&quot; — Mik Kersten&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The old model, built for predictability and control, stifles innovation. It creates silos, breaks accountability, and fails to account for the non-linear, iterative nature of modern software development.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Flow Framework: A New Way to Think About Work&lt;/h2&gt;
&lt;p&gt;At the core of the book is the &lt;strong&gt;Flow Framework&lt;/strong&gt;, Kersten’s blueprint for transforming how organizations manage software delivery. It introduces four types of work items that flow through software value streams:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Features&lt;/strong&gt; – User-driven enhancements&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Defects&lt;/strong&gt; – Quality issues&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Risks&lt;/strong&gt; – Security, compliance, or other risk-related work&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Debts&lt;/strong&gt; – Technical or infrastructure improvements&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These “Flow Items” are the units of measurement in the Flow Framework. Organizations then track &lt;strong&gt;Flow Metrics&lt;/strong&gt; like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Flow Velocity&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow Efficiency&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow Time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow Load&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The ultimate goal? &lt;strong&gt;Aligning IT with business outcomes&lt;/strong&gt; in a measurable, real-time way.&lt;/p&gt;
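&lt;p&gt;As a toy sketch of how two of these metrics can be computed for a single work item (the timestamps below are invented; real Flow Framework tooling aggregates these across whole value streams):&lt;/p&gt;

```python
# Flow Time: elapsed time from when an item enters the value stream
# until it completes. Flow Efficiency: the fraction of that elapsed
# time the item was actively worked on. Dates are hypothetical.
from datetime import date

entered_flow = date(2025, 3, 3)   # item pulled into the value stream
completed = date(2025, 3, 31)
active_days = 7                   # days someone was actually working on it

flow_time = (completed - entered_flow).days   # 28 elapsed days
flow_efficiency = active_days / flow_time     # 0.25, i.e. 25% active
print(flow_time, f"{flow_efficiency:.0%}")
```

A 25% flow efficiency means the item spent three quarters of its life waiting in queues, which is precisely the kind of invisible delay the framework is designed to surface.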
&lt;hr /&gt;
&lt;h2&gt;The Age of Software and the Turning Point&lt;/h2&gt;
&lt;p&gt;Kersten situates this shift in the broader context of Carlota Perez’s theory of technological revolutions. We are now entering the &lt;strong&gt;Deployment Period&lt;/strong&gt; of the &lt;strong&gt;Age of Software&lt;/strong&gt;—the point where companies must either adapt to software-driven ways of working or fade into irrelevance.&lt;/p&gt;
&lt;p&gt;Companies like &lt;strong&gt;Nokia&lt;/strong&gt; failed, not because they didn’t try to adapt, but because they measured success with flawed metrics—like Agile adoption rates—rather than true value delivery.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Value Streams: The Heart of the Product Model&lt;/h2&gt;
&lt;p&gt;Kersten urges organizations to orient around &lt;strong&gt;value streams&lt;/strong&gt;, not departments or projects. A value stream represents the end-to-end activities required to deliver value to the customer. Rather than shifting people between projects, companies should:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fund long-lived product teams&lt;/strong&gt;, not one-off projects&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Track flow&lt;/strong&gt;, not just budget and timeline&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrate tooling into a seamless Value Stream Network&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Case Studies and Epiphanies&lt;/h2&gt;
&lt;p&gt;Throughout the book, Kersten shares stories of transformation—both failed and successful. He reflects on lessons from:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The BMW Leipzig plant (a marvel of lean production)&lt;/li&gt;
&lt;li&gt;Nokia’s Agile rollout&lt;/li&gt;
&lt;li&gt;A global bank’s $1B transformation gone sideways&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;He outlines &lt;strong&gt;three epiphanies&lt;/strong&gt; from his journey:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Architecture must align to value stream&lt;/strong&gt;, not the reverse&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Disconnected toolchains destroy productivity&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Software is not a pipeline—it’s a collaborative network&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2&gt;Why Most Digital Transformations Fail&lt;/h2&gt;
&lt;p&gt;Many organizations fail despite well-funded Agile/DevOps efforts. Kersten identifies three core issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Focus on &lt;strong&gt;activities&lt;/strong&gt; (e.g., Agile training), not &lt;strong&gt;outcomes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Lack of &lt;strong&gt;business–IT alignment&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Inability to &lt;strong&gt;see or measure value flow&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The solution is not more tooling or process frameworks—it’s a shift in mindset and management logic.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Beyond the Turning Point&lt;/h2&gt;
&lt;p&gt;Kersten finishes with a clear call to action. The Age of Software is here, and those who fail to adapt risk becoming obsolete. But those who embrace product thinking and build connected &lt;strong&gt;Value Stream Networks&lt;/strong&gt; will:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deliver better customer outcomes&lt;/li&gt;
&lt;li&gt;Make smarter investments&lt;/li&gt;
&lt;li&gt;Attract and retain top talent&lt;/li&gt;
&lt;li&gt;Regain competitive advantage&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;TL;DR: Key Takeaways&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;project model is broken&lt;/strong&gt; for modern software delivery.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Flow Framework&lt;/strong&gt; replaces it with measurable, value-centric metrics.&lt;/li&gt;
&lt;li&gt;Shift focus from projects to &lt;strong&gt;products and value streams&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Digital transformation fails without visibility into &lt;strong&gt;Flow&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Age of Software demands a &lt;strong&gt;new management paradigm&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Inspired by Mik Kersten’s Project to Product. For more, visit &lt;a href=&quot;https://projecttoproduct.org&quot;&gt;projecttoproduct.org&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content:encoded><category>Flow Framework</category><category>Product Thinking</category><category>Digital Transformation</category><category>Software Delivery</category><category>Book Reviews</category><author>Steve James</author></item><item><title>What Lean Really Means</title><link>https://stvpj.com/blog/what-lean-really-means/</link><guid isPermaLink="true">https://stvpj.com/blog/what-lean-really-means/</guid><description>A clear and practical look at what Lean really means, why it’s often misunderstood, and how embracing true Lean principles can drive lasting customer value, quality, and continuous improvement.</description><pubDate>Sat, 19 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Lean Thinking: What It Is—and What It Isn’t&lt;/h2&gt;
&lt;p&gt;Lean is one of the most misunderstood concepts in modern business. It’s often mistaken for a buzzword, a quick-fix method, or worse—just another cost-cutting exercise.&lt;/p&gt;
&lt;p&gt;Let’s start by setting the record straight.&lt;/p&gt;
&lt;p&gt;Lean is not:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A miracle cure for all business problems&lt;/li&gt;
&lt;li&gt;A synonym for “doing more with less” at any cost&lt;/li&gt;
&lt;li&gt;A new religion, fad, or acronym&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Lean is a long-term, strategic approach to improving how organizations work—one that puts customer value at the centre and treats anything that doesn’t contribute to that value (or the safety, quality, and stability of the organization) as waste.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Lean is a philosophy of continuous improvement that eliminates waste, empowers people, and focuses relentlessly on delivering value to customers.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;The Three Pillars of Lean&lt;/h2&gt;
&lt;p&gt;All credible definitions of Lean align around three core goals:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Deliver better value to customers&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Do more with less&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ensure quality and long-term sustainability are never compromised&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Organizations that embrace Lean behave in consistent, observable ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Everyone understands what customers truly value&lt;/li&gt;
&lt;li&gt;Continuous improvement is part of daily operations&lt;/li&gt;
&lt;li&gt;Respect for people is central&lt;/li&gt;
&lt;li&gt;The organization aligns long-term thinking with short-term actions&lt;/li&gt;
&lt;li&gt;Lean becomes part of the culture—not just a program&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;The Toyota Way: 14 Principles of Lean Excellence&lt;/h2&gt;
&lt;p&gt;Jeffrey Liker’s &lt;em&gt;The Toyota Way&lt;/em&gt; breaks Lean into 14 principles, grouped into four themes:&lt;/p&gt;
&lt;h3&gt;1. Philosophy as a Foundation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;1. Make decisions based on long-term thinking&lt;/strong&gt;, not short-term gains.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;2. The Right Process Produces the Right Results&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;2. Create continuous flow&lt;/strong&gt; to surface problems&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;3. Use pull systems&lt;/strong&gt; to avoid overproduction&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4. Level out workloads&lt;/strong&gt; (heijunka)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;5. Stop to fix problems&lt;/strong&gt; and get quality right the first time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;6. Standardize tasks&lt;/strong&gt; as the basis for improvement and empowerment&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;7. Use visual controls&lt;/strong&gt; so no problems stay hidden&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;8. Use only reliable, thoroughly tested technology&lt;/strong&gt; that supports people&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;3. Add Value by Developing People and Partners&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;9. Grow leaders&lt;/strong&gt; who understand and teach the philosophy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;10. Build great teams&lt;/strong&gt; that align with company goals&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;11. Respect partners and suppliers&lt;/strong&gt;—help them improve too&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;4. Continuous Problem Solving Drives Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;12. Go see for yourself (genchi genbutsu)&lt;/strong&gt; to understand situations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;13. Make decisions slowly by consensus&lt;/strong&gt;, implement them quickly&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;14. Become a learning organization&lt;/strong&gt; through reflection and improvement&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;The 8 Wastes of Lean&lt;/h2&gt;
&lt;p&gt;Lean identifies 8 common forms of waste, present in both manufacturing and service sectors:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Waiting&lt;/strong&gt; – Delays between process steps&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Overproduction&lt;/strong&gt; – Doing more than needed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rework&lt;/strong&gt; – Fixing mistakes or defects&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Motion&lt;/strong&gt; – Unnecessary movement of people&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transport&lt;/strong&gt; – Unneeded movement of materials or information&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Processing&lt;/strong&gt; – Extra work that doesn’t add value&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Inventory&lt;/strong&gt; – Excess stock or queued activity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Talent&lt;/strong&gt; – Underusing people&apos;s skills and creativity&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2&gt;Final Thoughts: Lean as a Culture, Not a Checklist&lt;/h2&gt;
&lt;p&gt;Lean is not a toolset. It’s a mindset.&lt;br /&gt;
It’s not a one-time transformation—it’s a way of thinking.&lt;/p&gt;
&lt;p&gt;It asks us to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Empower people&lt;/li&gt;
&lt;li&gt;Eliminate waste&lt;/li&gt;
&lt;li&gt;Pursue purpose&lt;/li&gt;
&lt;li&gt;Continuously improve&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It’s hard work. But for organizations willing to commit, Lean offers a path to more meaningful value, healthier teams, and resilient long-term success.&lt;/p&gt;
</content:encoded><category>Lean</category><category>Continuous Improvement</category><category>Waste</category><category>Kaizen</category><category>Toyota Way</category><category>Process Improvement</category><category>Business Strategy</category><category>Organizational Change</category><author>Steve James</author></item><item><title>Agile Isn’t What You Think</title><link>https://stvpj.com/blog/agile-pitfalls-measuring-value/</link><guid isPermaLink="true">https://stvpj.com/blog/agile-pitfalls-measuring-value/</guid><description>A deep dive into why most Agile implementations miss the point, how focusing on velocity over value undermines true agility, and what it really takes to embrace uncertainty and deliver meaningful outcomes.</description><pubDate>Thu, 20 Mar 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;The Pitfalls of One-Size-Fits-All Agile: Why Most Agile Implementations Miss the Point&lt;/h2&gt;
&lt;p&gt;Agile is everywhere. Or so it seems.&lt;/p&gt;
&lt;p&gt;Walk into nearly any modern software organization and you’ll be told they “do agile.” There are stand-ups. Sprints. Story points. Jira boards. Retros. There&apos;s probably a release train or two. On paper, it looks like agility. But in reality, it&apos;s anything but.&lt;/p&gt;
&lt;p&gt;Despite widespread adoption of agile methodologies and frameworks, most companies still fail to embody what agile truly stands for. What we’re seeing isn’t agile—it’s agile theater. A performance of rituals devoid of the mindset and principles that actually make agile work.&lt;/p&gt;
&lt;p&gt;Why? Because no one wants to talk about the real reason agile fails in big organizations:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;True agility means you won’t know exactly how long something will take, or how much it will cost, before you begin.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That’s the uncomfortable truth. And it&apos;s the one truth that almost no one in a boardroom wants to hear.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Illusion of Predictability&lt;/h2&gt;
&lt;p&gt;Let’s be clear: &lt;strong&gt;agile is not a framework. It’s a mindset.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When the Agile Manifesto was written, it was a response to the rigidity of traditional software development—a world in which long-term plans rarely survived contact with reality. Agile promised something better: flexibility, collaboration, and continuous delivery of value.&lt;/p&gt;
&lt;p&gt;But large organizations have tried to retrofit agile into the very structures it was meant to replace. They’ve imposed predictability on top of adaptability. They&apos;ve institutionalized delivery cadences and backlogs and roadmaps and burndown charts in an effort to make agile feel safe and familiar. In doing so, they&apos;ve fundamentally undermined it.&lt;/p&gt;
&lt;p&gt;Frameworks like SAFe and LeSS may offer a sense of control at scale, but they often do so by compromising on the core agile principle: &lt;strong&gt;embrace uncertainty&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Because at the heart of true agility is a simple reality: &lt;em&gt;you are building something new, and you cannot plan certainty into discovery.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why Agile Needs Uncertainty&lt;/h2&gt;
&lt;p&gt;The Agile Manifesto begins with individuals and interactions over processes and tools. It emphasizes working software over comprehensive documentation. It encourages customer collaboration over contract negotiation. And above all, it values responding to change over following a plan.&lt;/p&gt;
&lt;p&gt;These are not project management practices. These are philosophical commitments.&lt;/p&gt;
&lt;p&gt;And if you take them seriously, then it follows that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You &lt;strong&gt;cannot guarantee fixed scopes, timelines, or budgets&lt;/strong&gt; up front.&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;will learn things mid-flight&lt;/strong&gt; that will force you to change direction.&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;must give teams the autonomy&lt;/strong&gt; to solve problems in ways that aren&apos;t fully spec’d before they begin.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is what makes agile work—and what makes it so uncomfortable for command-and-control management styles.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Velocity Is Not Value&lt;/h2&gt;
&lt;p&gt;In the absence of real agility, organizations default to what they can measure. And the easiest thing to measure is &lt;strong&gt;velocity&lt;/strong&gt;—how many story points a team completes per sprint. But velocity is a local optimization. It tells you how &lt;em&gt;busy&lt;/em&gt; your team is, not how &lt;em&gt;effective&lt;/em&gt; they are.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Velocity is about output. Agile is about outcomes.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can double your team’s velocity and still build something no one wants. You can hit every sprint goal and still miss the market. You can have high throughput and zero impact.&lt;/p&gt;
&lt;p&gt;This is where most agile transformations break down: they prioritize activity over value.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;So, What Should We Measure?&lt;/h2&gt;
&lt;p&gt;If agile is about delivering value, then &lt;strong&gt;value&lt;/strong&gt; must be what we measure.&lt;/p&gt;
&lt;p&gt;But value isn’t a simple thing. It’s not a number you pull out of Jira. It’s not a burndown chart or a feature count.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Value is what helps your organization succeed.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It shows up in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Increased revenue&lt;/li&gt;
&lt;li&gt;Reduced customer churn&lt;/li&gt;
&lt;li&gt;Improved user satisfaction&lt;/li&gt;
&lt;li&gt;Greater market share&lt;/li&gt;
&lt;li&gt;Stronger engagement or retention&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are &lt;strong&gt;lagging indicators&lt;/strong&gt; of success—they tell you after the fact whether what you built actually made a difference. But because they lag, they’re not always useful for day-to-day decision-making.&lt;/p&gt;
&lt;p&gt;That’s where &lt;strong&gt;proxy metrics&lt;/strong&gt; come in.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Proxy Metrics: Aligning Toward Impact&lt;/h2&gt;
&lt;p&gt;To make agile actionable in the short term, teams need &lt;strong&gt;leading indicators&lt;/strong&gt; that signal whether they’re moving in the right direction.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Engineering&lt;/strong&gt; teams focus on delivery health:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cycle time&lt;/li&gt;
&lt;li&gt;Deployment frequency&lt;/li&gt;
&lt;li&gt;Lead time for changes&lt;/li&gt;
&lt;li&gt;Change failure rate&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These help identify bottlenecks, inefficiencies, and delivery risks. They don’t measure value directly—but they indicate how reliably the team can deliver value when it’s found.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Product&lt;/strong&gt; teams focus on signals of product-market fit:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Activation rates&lt;/li&gt;
&lt;li&gt;Feature adoption&lt;/li&gt;
&lt;li&gt;Retention curves&lt;/li&gt;
&lt;li&gt;Customer satisfaction scores&lt;/li&gt;
&lt;li&gt;Net Promoter Score (NPS)&lt;/li&gt;
&lt;li&gt;Task success rates&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These help gauge whether users are finding what’s being built useful and valuable.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The real power comes from aligning these two sets of indicators: ensuring that &lt;strong&gt;delivery teams are enabled to move fast and safely&lt;/strong&gt;, while &lt;strong&gt;product teams are focused on the highest-leverage problems.&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Efficiency vs. Effectiveness: Know the Difference&lt;/h2&gt;
&lt;p&gt;Here’s the critical distinction that many agile teams blur:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Delivery teams&lt;/strong&gt; should be optimized for &lt;strong&gt;efficiency&lt;/strong&gt;—to execute with speed, quality, and stability.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Product teams&lt;/strong&gt; should be optimized for &lt;strong&gt;effectiveness&lt;/strong&gt;—to ensure that what gets delivered actually matters.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you optimize only for speed, you risk building the wrong thing faster.&lt;br /&gt;
If you optimize only for insight, you risk discovering great opportunities too slowly.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Agile is about continuously improving both sides of that equation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating a &lt;strong&gt;clear separation of concerns&lt;/strong&gt; between prioritization and execution.&lt;/li&gt;
&lt;li&gt;Funding and structuring teams around &lt;strong&gt;outcomes, not projects&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Giving product and delivery teams &lt;strong&gt;joint ownership of goals&lt;/strong&gt;, but distinct accountability for &lt;strong&gt;what&lt;/strong&gt; gets done and &lt;strong&gt;how&lt;/strong&gt; it gets done.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Agility Without Illusions&lt;/h2&gt;
&lt;p&gt;The real tragedy of faux-agile is that it gives organizations the illusion of adaptability without requiring any of the discipline or humility that true agility demands.&lt;/p&gt;
&lt;p&gt;Agile is not about adopting a framework. It’s about cultivating a mindset:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One that values &lt;strong&gt;learning over knowing&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;One that embraces &lt;strong&gt;change over control&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;One that prioritizes &lt;strong&gt;customer outcomes over internal process fidelity&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When you strip away the jargon, the tickets, and the tooling, this is the real work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Helping organizations get comfortable with uncertainty.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Focusing teams on value, not velocity.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Measuring success not by how much you deliver—but by how much it matters.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Final Thought&lt;/h2&gt;
&lt;p&gt;If your agile process gives you the illusion of predictability but none of the adaptability, it’s not agile. It’s a façade.&lt;/p&gt;
&lt;p&gt;And if your stakeholders expect certainty in timelines and cost before any real discovery has happened, they’re not signing up for agile—they’re asking for waterfall with daily standups.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The question isn’t “are we agile?”&lt;br /&gt;
The real question is: &lt;strong&gt;do we have the courage to let go of control in pursuit of value?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
</content:encoded><category>Agile</category><category>Product Management</category><category>Value</category><category>Velocity</category><category>Product Strategy</category><author>Steve James</author></item><item><title>Why Product Managers Should Trust Their Gut When Data Falls Short</title><link>https://stvpj.com/blog/product-lack-of-data/</link><guid isPermaLink="true">https://stvpj.com/blog/product-lack-of-data/</guid><description>Why great Product Managers know when to trust their intuition—especially when data is incomplete, unavailable, or unable to guide breakthrough innovation.</description><pubDate>Fri, 14 Feb 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As Product Managers, we’re often told to “let the data speak” and to rely on hard evidence to validate our hypotheses. And rightly so—data is a powerful tool for minimizing risk and understanding user behavior. However, there’s a vital piece of the product innovation puzzle that is often overlooked: intuition. When the right data isn’t available or doesn’t exist, your gut can be your most valuable ally.&lt;/p&gt;
&lt;p&gt;In fact, some of the most transformative innovations we celebrate today were born in moments when data was either nonexistent or irrelevant. Let’s explore why Product Managers shouldn’t shy away from trusting their instincts when navigating uncharted waters.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Data Is a Compass, Not a Crystal Ball&lt;/h2&gt;
&lt;p&gt;Marty Cagan, author of &lt;em&gt;Inspired&lt;/em&gt;, emphasizes the limitations of relying solely on data. He writes, &lt;em&gt;“Data is essential, but it only tells you about the past. If you want to invent the future, you have to look beyond the data.”&lt;/em&gt; Data can show us what has worked before, but it rarely points to what will work in the future, especially when you’re venturing into unexplored territory.&lt;/p&gt;
&lt;p&gt;When a Product Manager tries to innovate in a space that’s new or undefined, historical data often falls short. Imagine being the first to propose something like the iPhone. What data would have validated that hypothesis? None. Yet, Steve Jobs famously said, &lt;em&gt;“People don’t know what they want until you show it to them.”&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Visionary Thinking Often Precedes Data&lt;/h2&gt;
&lt;p&gt;Some of the most game-changing innovations weren’t built on data—they were built on a deep understanding of human needs and the courage to make bold decisions. Steve Jobs’ intuition about user experience led to products that redefined industries. Had Apple waited for market research to prove demand, the iPhone might never have happened.&lt;/p&gt;
&lt;p&gt;Similarly, Pawel Huryn, product leader and author, argues that &lt;em&gt;“great product managers are great storytellers—they imagine a better world and then find a way to make it real.”&lt;/em&gt; In the early stages of product development, storytelling and vision often fill the gaps where data cannot.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;When to Trust Your Gut&lt;/h2&gt;
&lt;p&gt;Trusting your gut doesn’t mean ignoring data—it means knowing when it isn’t enough. Here are some scenarios where intuition can be invaluable:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;When entering uncharted markets:&lt;/strong&gt; No historical data exists for truly innovative ideas.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When experimenting with new paradigms:&lt;/strong&gt; Early-stage products often lack the metrics to guide decisions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When responding to qualitative insights:&lt;/strong&gt; User interviews and anecdotal evidence sometimes reveal truths that numbers can’t.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Pawel Huryn puts it succinctly: &lt;em&gt;“Your gut feeling is your subconscious mind processing years of experience, knowledge, and observations. It’s not irrational; it’s informed.”&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Balancing Data and Intuition&lt;/h2&gt;
&lt;p&gt;The key is to strike a balance. Use data when it’s available and relevant, but don’t let the absence of perfect data paralyze you. Great Product Managers are not just analysts—they’re visionaries who can navigate ambiguity and take calculated risks.&lt;/p&gt;
&lt;p&gt;As Marty Cagan reminds us, &lt;em&gt;“At the end of the day, your job is to solve problems in a way that creates value for your customers and your company. Don’t let data become an excuse for inaction.”&lt;/em&gt;&lt;/p&gt;
</content:encoded><category>Product Thinking</category><category>Decision Making</category><category>Intuition</category><category>Data</category><category>Product Management</category><category>PM Tips</category><author>Steve James</author></item></channel></rss>