What Building an AI Bug Assistant Taught Me About Product Management
- Jarrod Dickerson

- Sep 20, 2025
- 3 min read

“A product manager builds the bridge between problem and solution, making sure what gets built solves something people really care about.”
That’s the kind of PM I aim to be. And building the AI Bug Assistant was my opportunity to live that.
Why This Project Even Mattered
In my time working with engineering and support teams, I kept hearing the same refrain:
“I know we fixed something like this before — just can’t find it now.”
IT teams were manually searching through archives, repeating solutions, wasting hours. What should have been simple knowledge retrieval felt like detective work. I saw an opening: what if we could make finding past bug fixes as easy as asking a question?
That question led to this project.
What I Did — In Plain English
Here’s how I turned that problem into a working solution:
Listened — spoke to support engineers about how they search for fixes, which keywords they try, and where they get stuck.
Mapped the data landscape — found where ticket logs live, how clean (or messy) they were, and what tools were available.
Chose my tools:
Google Cloud: for scalable infrastructure.
Cloud SQL (PostgreSQL): where ticket history lives.
Vertex AI + embeddings (text-embedding-005): for semantic search, not just keyword match.
Gemini 2.5 Flash: for generating human-friendly summaries of what was found.
Built a prototype on Cloud Run: a web UI where someone could type a natural-language question (e.g. “How was error 504 resolved in August?”) and get a summary.
Iterated quickly — cleaned up data, tuned embeddings, fixed edge cases where responses were off.
What Worked Really Well
These were the wins — the things that surprised me in a good way:
Users could go from “I’ll search for 20 minutes” to “I got an answer in 10 seconds”. That shift was what people noticed first.
Response quality was far better with RAG + embeddings than with keyword matching. It reliably captured semantic similarity, surfacing the right tickets even when the phrasing differed.
Because everything lived in GCP and used managed services (Cloud SQL, Vertex AI, Cloud Run), scaling up or down felt less scary.
What I Had to Learn, the Hard Way
It wasn’t all smooth. These are the lessons that came with some bruises:
Data cleanliness matters. If past ticket logs are messy (typos, missing fields, inconsistent formats), embeddings struggle. Fixing those issues eats time.
Cost vs. speed vs. accuracy is a triangle I kept pulling on. Larger models or richer embeddings meant better quality, but also higher latency and cost.
Expectation setting is everything. During demos, people expect “instantly perfect” — but real life has ambiguous or partial info. Being clear about what the assistant can and can’t do helped a lot.
What This Says I Can Do as a PM
If you bring me into your team, here’s what this project proves about how I work:
I don’t just chase “cool tech”; I chase problems people feel (e.g. wasted time, repeated fixes).
I can map technical components (cloud infra, embeddings, summarization) AND tie them to outcomes (faster resolution, reduced team frustration).
I iterate: build a working version, gather feedback, adjust.
I’m comfortable balancing trade-offs (speed, cost, accuracy), and communicating them to both engineers and stakeholders.
What’s Next — Because This Isn’t Done
Because no product is ever “done”, here’s where I’d push this further:
Make the assistant usable in places support people already are: Slack, Microsoft Teams, Jira.
Build a dashboard to measure time saved, usage frequency, accuracy of answers vs. expectations.
Handle more types of inputs — logs, error screenshots, maybe even voice.
Set up feedback loops so users can correct or clarify when the agent gives a partial/misleading answer, to improve quality over time.
Final Thought
One of my favorite things about product management is seeing a small change in workflow turn into something that shifts how people work. With the AI Bug Assistant, I saw that shift — from hunting through logs, to simply asking, trusting the answer, and moving on.
That’s what I want to bring to the next team: building with empathy, shipping with impact, and learning constantly.


