1. Context & Business Problem
Solve the adoption failure of an AI Answer Assistant by shifting the product strategy from technical accuracy to user-perceived trustworthiness.
Fortude's Charlie AI Answer Assistant was designed to help Infor M3 ERP users query complex data through natural language. The product was technically functional, but early adoption was stagnant. Users were not abandoning it because it lacked features — they were abandoning it because they didn't trust its responses. When an AI assistant confidently delivers a wrong answer in an enterprise context, it doesn't get a second chance.
2. My Role & Ownership
I owned the product strategy for Charlie AI, working directly with the engineering team to redefine what "good" looked like for the product. I facilitated discovery sessions with early adopters, defined the new validation architecture, and drove the roadmap from the strategy pivot through to a standalone Insight Assistant built on Charlie's architecture.
3. Constraints & Trade-offs
We had no ability to retrain the underlying model; the pivot had to be achievable through product-layer changes, not ML infrastructure work. Enterprise customers have zero tolerance for hallucinations, so even a small error rate was catastrophic for trust. And with only a limited pool of early adopters available for testing, every iteration had to be deliberate and well-measured.
4. Discovery & Key Insights
User interviews with early adopters revealed a clear pattern: it wasn't that Charlie was frequently wrong — it was that users couldn't tell when to trust it. An answer delivered with identical confidence regardless of data quality created anxiety. The insight: trust is a product feature, not an engineering metric. Users needed clarity about *why* a response was given, not just *what* the response was.
5. Key Decisions
Shifted the product strategy from "maximize accuracy" to "maximize response clarity". The core decision was to implement a validation agent that checks data credibility before any response is delivered to the end user. Responses are only surfaced when the agent can confirm the supporting data is reliable — if not, the system explicitly flags the uncertainty rather than guessing. This was a deliberate trade-off: slightly fewer answers, but dramatically higher trust.
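To make the gating behavior concrete, here is a minimal Python sketch of a pre-response credibility gate of the kind described above. Everything in it (DraftAnswer, score_sources, the 0.8 threshold) is an illustrative assumption for the sketch, not Charlie's actual implementation.

```python
from dataclasses import dataclass

# Assumed cutoff for this sketch; in practice the threshold would be
# tuned per data domain. Not Charlie's real configuration.
CREDIBILITY_THRESHOLD = 0.8


@dataclass
class DraftAnswer:
    """A candidate answer plus the data records it was derived from."""
    text: str
    sources: list[str]


def score_sources(sources: list[str]) -> float:
    """Placeholder credibility check. A real agent would score
    completeness, freshness, and lineage of the supporting data."""
    return 1.0 if sources else 0.0


def gate_response(draft: DraftAnswer) -> str:
    """Surface the answer only when the supporting data clears the
    threshold; otherwise flag the uncertainty instead of guessing."""
    if score_sources(draft.sources) >= CREDIBILITY_THRESHOLD:
        return draft.text
    return (
        "I can't verify the underlying data well enough to answer this "
        "confidently. Records checked: "
        + (", ".join(draft.sources) if draft.sources else "none found")
    )
```

The point of the sketch is the control flow, not the scoring: the assistant's default branch is an explicit uncertainty message, and a direct answer is the exception that has to be earned.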
6. Execution Snapshot
Worked with engineering to design and deploy the credibility validation agent as a pre-response layer in Charlie's architecture. Redesigned the response format to surface source context alongside answers, giving users auditability; a sketch of that response shape follows below. Iterated the feature with the early adopter cohort using usage frequency and session depth as trust proxies. Once adoption stabilized, shaped and sequenced a roadmap to launch a standalone Insight Assistant leveraging Charlie's architecture, positioning it as a general-purpose enterprise knowledge layer.
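For the redesigned response format, here is a rough sketch of what an auditable response payload could look like, continuing the same Python sketch. The field names (SourceRef, validated, freshness) are assumptions for illustration, not the shipped schema.

```python
from dataclasses import dataclass, field


@dataclass
class SourceRef:
    """One record the answer was derived from; assumed shape."""
    system: str      # e.g. "Infor M3"
    record_id: str   # identifier of the underlying record
    freshness: str   # when the data was last updated


@dataclass
class AssistantResponse:
    answer: str
    sources: list[SourceRef] = field(default_factory=list)
    validated: bool = False  # set by the credibility gate sketched earlier


def render(response: AssistantResponse) -> str:
    """Render the answer together with its audit trail, mirroring the
    'source context alongside answers' design described above."""
    lines = [response.answer, "", "Sources:"]
    lines += [
        f"- {s.system} / {s.record_id} (updated {s.freshness})"
        for s in response.sources
    ]
    lines.append(
        "Validated: " + ("yes" if response.validated else "flagged as uncertain")
    )
    return "\n".join(lines)
```

The design choice this illustrates is that source context travels with every answer rather than living behind a separate "details" action, so users never see a bare assertion.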
7. Outcomes & Impact
Achieved a 75% increase in usage among early adopters following the strategy pivot and agent deployment. Signed up Toyota Sri Lanka to evaluate the standalone Insight Assistant for automating their manual service job tracking processes. Created a clear pathway to extend Charlie's architecture into new verticals beyond Infor M3, establishing a platform strategy rather than a single-product roadmap.
8. Learnings & What I'd Do Differently
The decision to slow down responses in exchange for higher confidence was initially controversial internally, but it proved to be exactly the right call. Enterprise users would rather wait for a trustworthy answer than get an instant, uncertain one.
I would involve customer success teams earlier in the discovery process — they had the most direct signal on where user trust was breaking down.