Unit 1/Lesson 2 of 3

The AI Product Lifecycle

AI products have a different build → ship → improve loop. Data and model quality are never 'done' — they iterate continuously, and the PM owns that loop.

Skills: AI lifecycle ownership, Iteration frameworks, Roadmap thinking for AI

The traditional PM loop doesn't fully apply

Traditional product development: define → design → build → ship → measure → iterate. Clean and linear.

AI products break this loop in two ways:

1. You ship a model, not just a feature. The model is continuously retrained on new data. The product behavior can change between releases without any code change.

2. Measurement is harder. How do you measure whether a smart autocomplete got 'better'? You need specialized evaluation pipelines, not just A/B tests.
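One way to make that measurement concrete is a small offline evaluation that compares a model's suggestions to a gold-labeled set. This is a minimal sketch; the `precision_recall` helper and the toy suggestion lists are hypothetical, not a real evaluation pipeline:

```python
# Offline evaluation sketch: score a batch of model suggestions
# against gold labels, micro-averaged across examples.

def precision_recall(predictions, gold):
    """Micro-averaged precision/recall over per-example suggestion sets."""
    tp = fp = fn = 0
    for pred, expected in zip(predictions, gold):
        pred, expected = set(pred), set(expected)
        tp += len(pred & expected)   # suggested and correct
        fp += len(pred - expected)   # suggested but wrong
        fn += len(expected - pred)   # correct but never suggested
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy data: per-document autocomplete suggestions vs. gold labels.
preds = [["shall"], ["may", "must"], []]
gold = [["shall"], ["must"], ["herein"]]
p, r = precision_recall(preds, gold)  # tp=2, fp=1, fn=1
```

Unlike a click-through A/B test, this kind of check can run on every retrained model before anyone ships it.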

Ko (コウ)

In Go, a Ko is a situation that repeats — the same position keeps recurring. AI products can feel like Ko: you keep revisiting the same data quality problems, the same model edge cases. The PM's job is to break the cycle by improving the training data or evaluation process.

The AI PM loop

Define → Data → Model → Evaluate → Ship → Monitor → Feedback → back to Data.

The key additions vs. traditional PM:

- Data step: Before you can build a model, you need labeled training data. The PM often has to define what 'correct' looks like — writing annotation guidelines, working with legal experts to label contract clauses.

- Evaluate step: Does the model meet the quality bar for launch? What's the precision/recall? Who reviews model outputs before users see them?

- Monitor step: After launch, the model encounters real-world inputs it was never trained on. The PM sets up monitoring for failure modes and degradation.

- Feedback loop: User corrections, rejections, and edge cases flow back into the training pipeline. The PM decides what gets prioritized.
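The Monitor and Feedback steps can be sketched in a few lines: a rolling acceptance rate that flags degradation, with rejected suggestions queued for relabeling. The class name, window size, and threshold below are illustrative assumptions, not any product's actual pipeline:

```python
from collections import deque

class SuggestionMonitor:
    """Sketch of post-launch monitoring plus the feedback loop."""

    def __init__(self, window=100, alert_below=0.5):
        self.events = deque(maxlen=window)  # rolling accept/reject history
        self.alert_below = alert_below      # quality bar, a PM decision
        self.relabel_queue = []             # flows back into the Data step

    def record(self, suggestion, accepted):
        """Log one user decision; queue rejections for relabeling."""
        self.events.append(accepted)
        if not accepted:
            self.relabel_queue.append(suggestion)

    def acceptance_rate(self):
        return sum(self.events) / len(self.events) if self.events else 1.0

    def degraded(self):
        """True when the rolling acceptance rate drops below the bar."""
        return self.acceptance_rate() < self.alert_below
```

Two PM decisions are encoded directly here: `alert_below` is the quality bar for alerting, and `relabel_queue` is the prioritized feedback that closes the loop back to the Data step.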

What this means at BoostDraft

BoostDraft's proofreading and autocomplete features run on models that need to understand legal language patterns. The PM role involves:

- Working with legal domain experts to define what 'correct' contract language looks like

- Defining quality thresholds (e.g., minimum precision before enabling a suggestion)

- Designing how user feedback (accepting/rejecting suggestions) feeds back into model improvement

- Partnering with engineering to evaluate when NLP is the right tool vs. a simpler rule-based approach

This is what the JD means by 'evaluate AI vs. non-AI feasibility' — not every problem needs a model.
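As a toy illustration of the non-AI side of that tradeoff, a deterministic rule (here, a hypothetical regex flagging ambiguous numeric dates in contracts) solves some proofreading problems with no model, no training data, and no evaluation pipeline:

```python
import re

# Illustrative rule-based check: numeric dates like 3/4/2024 are
# ambiguous (March 4 or April 3?) and can be flagged without a model.
DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def flag_numeric_dates(text):
    """Return every ambiguous numeric date found in the text."""
    return [m.group() for m in DATE_RE.finditer(text)]

flags = flag_numeric_dates("Effective as of 3/4/2024, the parties agree...")
# → ["3/4/2024"]
```

When a rule like this covers the failure mode with near-perfect precision, shipping it beats building and maintaining a model for the same job.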


In the AI product lifecycle, what is the PRIMARY role of the 'Monitor' step after launch?