Stakeholder Review — AI-Powered Lead Qualification Chat
Document type: Stakeholder review
PRD reviewed: Product Requirements Document v1.0
Review date: April 2026
Status: Pending sign-off
Audience: Leadership / Commercial stakeholders
Overall Position
The case for building is sound: the commercial rationale is clear, the helpfulness-first approach is the right instinct, and the scope is appropriately constrained for an MVP. However, eight concerns must be addressed before sign-off — not to block the build, but to ensure the company is not exposed to avoidable commercial, legal, and reputational risks. These concerns are raised from a business perspective; each requires an explicit business answer, not an engineering one.
Concerns
1 — Success Metrics Lack a Minimum Acceptable Bar
The go/no-go decision at 90 days is based on “chat leads reach sales call at higher rate than form leads.” There is no stated minimum improvement that makes the investment worthwhile. A 1% improvement technically passes this metric. Before launch, the commercial team should define the minimum bar — for example, chat must produce at least 2× the qualification rate of forms to justify ongoing operation. Without this, the 90-day review becomes a subjective debate rather than a clear business decision.
The 15% contact capture rate target also appears without justification. Is it derived from industry benchmarks, the current form submission rate, or estimated from persona volume? Without context it is impossible to know whether 15% is ambitious, conservative, or arbitrary.
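To make concrete the kind of explicit decision rule this concern is asking for, the check could be sketched as follows. The 2× multiplier and all rates below are illustrative placeholders, not agreed figures — the actual bar is exactly what the commercial team must define:

```python
# Illustrative go/no-go rule for the 90-day review.
# MIN_UPLIFT and the example rates are placeholder assumptions, not agreed targets.

MIN_UPLIFT = 2.0  # chat must reach sales calls at >= 2x the form rate (assumed bar)

def go_no_go(chat_rate: float, form_rate: float) -> str:
    """Compare chat vs form qualification rates against an explicit minimum bar."""
    if form_rate == 0:
        return "go" if chat_rate > 0 else "no-go"
    return "go" if chat_rate / form_rate >= MIN_UPLIFT else "no-go"

print(go_no_go(0.12, 0.05))  # 2.4x uplift: clears the bar -> go
print(go_no_go(0.06, 0.05))  # 1.2x uplift: "higher than forms" but below the bar -> no-go
```

The second case is precisely the scenario the current PRD wording would wave through: a marginal improvement that technically "passes" while failing to justify the investment.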
2 — The Baseline Must Be Measured Before Launch, Not After
The PRD lists OQ-03 (current form submission volume and qualification rate) as a stakeholder-owned open question with “before launch” as the needed-by date. This is the wrong sequencing.
If the baseline is not captured before the chat goes live, any change in traffic patterns, seasonality, or sales team behaviour during the 90-day window will contaminate the comparison. The baseline measurement must happen now — before any change is made to the website — or the 90-day go/no-go review will have nothing reliable to compare against.
3 — Sales Team Capacity Commitment Is Not Confirmed
The < 5-minute hot lead response time assumes the sales team will respond to a Slack ping within 5 minutes during business hours. The PRD itself notes the current baseline is 6–8 hours for the first real human response. Closing that gap from 6–8 hours to 5 minutes is not a technology problem — it is an operations and staffing commitment.
Before the system is built to promise < 5 minutes to prospects, the commercial team must confirm:
- Who is on hot lead duty at any given time during CET hours?
- What is the coverage plan for high-volume days (Mondays, post-holiday)?
- What happens operationally when the response time is missed?
A missed commitment delivered by an automated system damages trust more than no commitment at all. If the sales team cannot reliably meet this SLA, the < 5-minute promise must be removed from the product before launch.
4 — The Calendly 2-Day Minimum May Lose Hot Leads to Competitors
A hot lead — by the PRD’s own definition — is a visitor with confirmed urgency signals who is actively evaluating vendors. Routing them to a calendar with a mandatory two-day wait before any call can be booked risks losing them to a competitor who responds the same day.
The rationale for the two-day buffer is that it gives the assigned sales rep time to review the context packet before the call. This is a valid internal process concern, but it must be weighed explicitly against conversion risk. The sales team should confirm they accept this trade-off. The two-day minimum should be treated as a hypothesis to be validated, not a fixed policy — if early data shows drop-off at the Calendly step, it must be revisited immediately.
5 — The “Proof Point” Claim Is a Reputational Liability if Execution Falls Short
Section 2.2 states that a well-executed AI chat is “itself a proof point” that the company builds AI systems that work in production. This is accurate — and it is equally true that a poorly executed chat is a damaging counter-signal. A company that sells AI engineering expertise cannot afford to have a visibly broken or frustrating chat as the first thing a technical buyer encounters.
The PRD does not define what “well-executed” means from a reputation perspective, and it contains no rollback plan. Before launch, the company needs:
- A clear definition of what quality failures (e.g. hallucinations about client work, offensive responses, persistent errors) would trigger taking the chat offline.
- A named person with the authority to pull the chat quickly if needed.
- A plan for communicating to prospects who had poor experiences.
6 — Client Confidentiality in the Knowledge Base Is Unaddressed
The RAG knowledge base will include “all available company case studies.” Some case studies reference specific client names, project outcomes, or proprietary technical details under commercial confidentiality agreements. An AI system generating responses from this content may surface client-specific information in ways that were not anticipated when the case studies were originally written — for example, in response to a competitor’s probe.
Before any content is ingested into the knowledge base, every case study must be reviewed against its underlying client agreement. This is not just a content quality task — it is a legal and commercial obligation. The review should be signed off by whoever owns client relationships, not delegated to marketing alone.
7 — The GDPR Data Notice Is Underspecified
The PRD requires “a brief data notice on first interaction” but does not specify what it must contain. For a system that collects personal data, processes conversation history via third-party LLM APIs, and retains records for 90 days, the minimum compliant notice must inform visitors:
- That they are interacting with an AI system, not a human.
- That their conversation is stored and may be processed by third-party services.
- That they have the right to request deletion of their data.
- How to exercise that right.
A generic cookie notice does not satisfy these requirements. The specific wording must be reviewed and approved by legal before the chat goes live. This is a compliance gate, not a design preference.
8 — No Budget or Cost Estimate
The PRD defines feature scope, requirements, and technical candidates but provides no indication of build cost, ongoing infrastructure cost, or LLM API cost at expected conversation volume. Leadership cannot give meaningful approval to a scope document without understanding the investment required.
Before sign-off is requested, the team should provide at minimum a rough order of magnitude covering:
- Engineering time to build the MVP (person-weeks).
- Monthly infrastructure cost at expected conversation volume.
- LLM API cost per 1,000 conversations under each provider candidate.
- Ongoing maintenance and content update cost.
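A rough order of magnitude for the LLM API line item is a straightforward token-volume calculation. The sketch below shows the shape of that estimate; every number in it (turn counts, token counts, per-token prices) is a placeholder assumption for illustration, not a quoted provider rate:

```python
# Rough-order-of-magnitude LLM API cost per 1,000 conversations.
# All figures are placeholder assumptions, not quoted prices.

def cost_per_1k_conversations(
    turns: int,               # assumed average turns per conversation
    input_tokens_per_turn: int,   # prompt + retrieved RAG context
    output_tokens_per_turn: int,  # model reply
    input_price_per_1m: float,    # USD per 1M input tokens (assumed)
    output_price_per_1m: float,   # USD per 1M output tokens (assumed)
) -> float:
    input_cost = turns * input_tokens_per_turn * input_price_per_1m / 1_000_000
    output_cost = turns * output_tokens_per_turn * output_price_per_1m / 1_000_000
    return (input_cost + output_cost) * 1_000

# Example with illustrative numbers: 8 turns, 2,000 input / 300 output tokens per turn,
# at assumed rates of $3 / $15 per 1M input / output tokens.
estimate = cost_per_1k_conversations(8, 2_000, 300, 3.00, 15.00)
print(f"~${estimate:.0f} per 1,000 conversations")
```

Running this same calculation with each provider candidate's actual published rates and realistic token counts (RAG context is usually the dominant input cost) would satisfy the third bullet above.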
Conditions for Sign-Off
- Minimum acceptable improvement threshold defined for the 90-day go/no-go (not just “higher than forms”)
- 15% contact capture rate target justified with source or benchmark
- OQ-03 baseline captured before the chat is deployed to the website
- Sales team confirms hot lead duty coverage and formally commits to the < 5-minute response SLA
- Calendly 2-day minimum accepted by sales team with explicit acknowledgement of drop-off risk
- Rollback criteria defined: what triggers taking the chat offline, named decision-maker
- All case studies reviewed against client agreements before knowledge base ingestion
- GDPR data notice wording drafted and approved by legal
- Rough order of magnitude cost estimate provided to leadership
This stakeholder review is a business-level assessment of PRD v1.0. Engineering concerns are addressed separately in the Engineering Review. Both reviews must reach sign-off before development begins.