NLP vs. LLMs — BoostDraft's Deliberate Choice
NLP and LLMs are both language AI — but they operate very differently. BoostDraft chose NLP because it's deterministic, auditable, and safe for legal use.
What NLP actually means
Natural Language Processing (NLP) is a broad field of AI that deals with understanding and manipulating text. It includes tasks like:
- Tokenization — splitting text into words or subwords
- Named entity recognition (NER) — identifying 'Acme Corp' as a company, '$5 million' as a monetary value
- Part-of-speech tagging — knowing that 'shall' is a modal verb with legal significance
- Text classification — categorizing a clause as an indemnification clause vs. a liability limitation
- Grammar checking — detecting inconsistencies in numbering, cross-references, or defined terms
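The first three tasks above can be sketched with simple deterministic rules. This is a minimal, illustrative example only — the regex patterns and function names are hypothetical, not BoostDraft's actual implementation:

```python
import re

# Hypothetical rule-based sketches of tokenization, NER, and POS-style tagging.

def tokenize(text: str) -> list[str]:
    """Tokenization: split text into tokens, keeping money amounts whole."""
    return re.findall(r"\$[\d,]+(?:\s*(?:million|billion))?|\w+", text)

def find_monetary_values(text: str) -> list[str]:
    """NER-style rule: tag dollar amounts as monetary values."""
    return re.findall(r"\$[\d,]+(?:\.\d+)?(?:\s*(?:million|billion))?", text)

def find_modal_shall(tokens: list[str]) -> list[int]:
    """POS-style rule: flag positions of the legally significant modal 'shall'."""
    return [i for i, t in enumerate(tokens) if t.lower() == "shall"]

sentence = "Acme Corp shall pay $5 million upon closing."
tokens = tokenize(sentence)
print(tokens)                          # ['Acme', 'Corp', 'shall', 'pay', '$5 million', 'upon', 'closing']
print(find_monetary_values(sentence))  # ['$5 million']
print(find_modal_shall(tokens))        # [2]
```

Note that each rule is fully inspectable: given the same input, the same pattern fires every time, which is the property the rest of this section builds on.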
BoostDraft uses these NLP techniques to power features like definition popups, smart formatting, and proofreading.
NLP operates in sente — to borrow the Go term for holding the initiative: it proactively scans the document, identifies issues, and presents solutions before the user goes looking. The user is always responding to the AI's moves, not the other way around.
What LLMs are and why they're different
Large Language Models (LLMs) like GPT-4 are trained on enormous text corpora and can generate fluent, contextually aware text. They're remarkable — but they have fundamental characteristics that make them risky for legal documents:
1. Hallucination — LLMs confidently generate plausible-sounding but factually wrong content. In a contract, a hallucinated clause could create unintended legal obligations.
2. Non-determinism — The same prompt can produce different outputs each time. This is great for creative tasks but catastrophic for a legal document where every word matters.
3. Opacity — You cannot explain exactly why an LLM produced a specific output. A rule-based NLP system can: 'the cross-reference in paragraph 4.2(b) points to a deleted section'.
4. Privacy risk — Cloud-based LLMs require sending your document to an external server. BoostDraft's product runs locally; an LLM integration would break that guarantee.
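The opacity and non-determinism points can be made concrete with a sketch of an auditable cross-reference check. This is a hypothetical illustration (the heading and reference patterns are assumptions, not BoostDraft's real rules), but it shows the key property: every flag comes with an exact, rule-traceable explanation, and the same document always produces the same findings.

```python
import re

def check_cross_references(document: str) -> list[str]:
    """Flag cross-references that point to sections not declared in the document.

    Assumes (for illustration) that sections are declared at the start of a
    line, e.g. '4.2(a) Payment terms', and referenced inline as
    'Section 4.2(a)' or 'paragraph 4.2(a)'.
    """
    # Collect every declared section number, e.g. {'1.1', '4.2(a)'}.
    declared = set(
        re.findall(r"^(\d+(?:\.\d+)*(?:\([a-z]\))?)\s", document, re.MULTILINE)
    )
    findings = []
    for ref in re.findall(
        r"(?:Section|paragraph)\s+(\d+(?:\.\d+)*(?:\([a-z]\))?)", document
    ):
        if ref not in declared:
            # The explanation names the exact rule violation — fully auditable.
            findings.append(
                f"cross-reference to {ref} points to a section that does not exist"
            )
    return findings

doc = """\
1.1 Definitions
4.2(a) Payment terms
The remedies in paragraph 4.2(b) survive termination.
"""
for finding in check_cross_references(doc):
    print(finding)  # cross-reference to 4.2(b) points to a section that does not exist
```

Run it twice and you get the identical output — the determinism an LLM cannot offer — and the finding itself tells the lawyer precisely which rule fired and where.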
The product PM insight: deliberate restraint
Here's the key insight for your interview: BoostDraft's decision not to use LLMs is not a technical limitation — it's a product strategy.
By choosing NLP, they can honestly tell enterprise legal customers: 'Our AI never hallucinates. It will catch errors, but it will never invent new ones. Every suggestion is auditable.'
This is a trust moat. Generative AI competitors might offer more impressive demos, but BoostDraft's reliability guarantee is worth more in a legal context.
An AI PM here needs to protect this moat by evaluating every future feature proposal against one question: 'does this compromise our reliability promise?'