Case Study · 03
Employee Feedback Intelligence Platform
An LLM-based analytics framework that turns 100K+ employee feedback entries a year into operational signal — shortening issue identification from T+1 to T+0.
Role
Lead Product Manager
Timeline
2023 — 2025
Team
PM, Eng, Data, Applied AI, Ops partners
Scope
7 business lines · 100K+ entries / yr
Background
Context
Employee feedback used to arrive as a flood of unstructured text from surveys, internal channels, town halls, and tickets. Reading it was a job; categorizing it was a backlog; acting on it was an afterthought.
We wanted feedback to behave like an operational signal: continuously parsed, mapped to the right owners, and surfaced to policy teams in time to actually change something.
Why it mattered
Problem
Off-the-shelf sentiment models were almost useless on internal language — products, teams, and policies all had names that meant nothing to a generic model. Categories drifted between business lines. And once a topic was extracted, no one trusted it enough to act without re-reading the raw text.
The hardest part wasn't the model. It was the taxonomy, the review loop, and the trust layer around the model.
What I owned
My Role
I led the platform from a prototype Notion dashboard to a production system across seven business lines. I designed the taxonomy framework, the human-in-the-loop review surface, and the workflow that turns clustered feedback into action items for the right operating team — and partnered with applied AI on the model evaluation loop.
How we built it
Solution Design
Unified Feedback Intake
Feedback from multiple channels, including chat groups, Oncall, tickets, and service reports, is unified into a standardized data structure for further analysis.
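A minimal sketch of what one such normalized record might look like; the field names and channel values here are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class FeedbackRecord:
    """One normalized feedback entry, independent of source channel."""
    record_id: str
    channel: str                  # e.g. "chat_group", "oncall", "ticket", "service_report"
    business_line: str            # one of the onboarded business lines
    submitted_at: datetime
    raw_text: str                 # original unstructured text, kept so reviewers can re-read it
    # Filled in later by the tagging stage; empty at intake:
    domain: Optional[str] = None
    sentiment: Optional[str] = None
    issue_type: Optional[str] = None
    extracted_context: dict = field(default_factory=dict)
```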
LLM-Based Understanding & Tagging
An LLM parses each entry to classify its service domain, identify sentiment polarity, infer the issue type, and extract key context such as location and affected service areas.
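A sketch of the tagging step under stated assumptions: `llm_complete` stands in for whatever text-in, text-out LLM client the platform uses, and the taxonomy values and prompt wording are placeholders, not the real versioned taxonomy:

```python
import json

# Placeholder taxonomy values; the real taxonomy is shared and versioned.
TAXONOMY_DOMAINS = ["payments", "scheduling", "facilities", "it_support"]

PROMPT_TEMPLATE = """You are tagging internal employee feedback.
Allowed service domains: {domains}
Return a JSON object with keys: domain, sentiment (positive | neutral | negative),
issue_type, and context (an object with optional location and affected_service).

Feedback: {text}"""

def tag_feedback(record, llm_complete):
    """Ask the LLM for structured tags. llm_complete is any text-in, text-out client."""
    prompt = PROMPT_TEMPLATE.format(domains=", ".join(TAXONOMY_DOMAINS),
                                    text=record.raw_text)
    raw = llm_complete(prompt)
    try:
        tags = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output: fail into the manual-review queue, don't guess
    record.domain = tags.get("domain")
    record.sentiment = tags.get("sentiment")
    record.issue_type = tags.get("issue_type")
    record.extracted_context = tags.get("context", {})
    return record
```

Failing into manual review on malformed output, rather than guessing, is the design choice that keeps the correction loop in the next step honest.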
Human-in-the-Loop Model Improvement
Reviewers manually verify and correct categories, and those corrections feed back into the model, so tagging accuracy keeps improving against real operational feedback and evolving service scenarios.
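One way this loop can be wired, sketched with hypothetical helpers (`record_correction`, `tagging_accuracy`): each reviewer fix is logged so it can double as evaluation data and as a candidate few-shot example:

```python
def record_correction(corrections, record, corrected_domain, reviewer):
    """Log a reviewer's fix; logged corrections double as eval data and few-shot candidates."""
    corrections.append({
        "text": record.raw_text,
        "model_domain": record.domain,
        "corrected_domain": corrected_domain,
        "reviewer": reviewer,
    })
    record.domain = corrected_domain  # the corrected tag is what downstream routing sees

def tagging_accuracy(corrections):
    """Share of reviewed entries where the model's tag was left unchanged."""
    if not corrections:
        return None
    agreed = sum(1 for c in corrections if c["model_domain"] == c["corrected_domain"])
    return agreed / len(corrections)
```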
Actionable Insight & Auto Escalation
The system converts fragmented feedback into structured insights, supports trend monitoring, and alerts the responsible teams when potential risks or high-priority issues are detected.
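A sketch of that escalation logic under assumptions; the rule table, thresholds, and the `enqueue` hook are illustrative, not the production routing config:

```python
from datetime import timedelta

# Illustrative routing table: which clusters page which owner queue, and how fast.
ESCALATION_RULES = [
    {"issue_type": "safety", "min_count": 1,  "queue": "trust-and-safety", "sla": timedelta(hours=4)},
    {"issue_type": "policy", "min_count": 10, "queue": "policy-ops",       "sla": timedelta(days=1)},
]

def escalate(cluster, enqueue):
    """Open an owner-queue item for any cluster that crosses an escalation threshold."""
    for rule in ESCALATION_RULES:
        if cluster["issue_type"] == rule["issue_type"] and cluster["count"] >= rule["min_count"]:
            enqueue(queue=rule["queue"],
                    title=f"{cluster['issue_type']}: {cluster['summary']}",
                    due_in=rule["sla"])
            return True
    return False  # below threshold: stays in trend monitoring, no page
```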
What changed
Before / After
Before
Feedback was read manually, with at least a one-day lag before patterns surfaced.
After
Issues are clustered and surfaced at T+0, the same day they're submitted.
Before
Categories were inconsistent across business lines, blocking cross-cutting analysis.
After
A shared, versioned taxonomy lets policy teams compare apples to apples.
Before
Analytics output didn't reach the people who could fix anything.
After
Clusters route to owner queues with SLAs, closing the loop from signal to action.
Outcomes
Impact
100K+
Feedback entries processed annually
T+0
Issue identification (down from T+1)
7
Core business lines onboarded
Multiple
Internal policy changes directly informed
What I'd carry forward
Learnings
On real operational data, the model is the easy part. The taxonomy, the review loop, and the routing decide whether anyone trusts the output.
If an AI feature doesn't route to a human queue with an SLA, it's still a dashboard. Dashboards don't change policy.
Letting operating teams own and edit the taxonomy turned the platform from "another analytics tool" into something they defended.