I've spent the past few years building 50+ AI agents in prod (some reached 1M+ sessions/day), and the hardest part was never building them — it was figuring out why they fail.

AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.

Kelet automates that investigation. Here's how it works:

1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply
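To make steps 1–3 concrete, here's a minimal sketch of the signals → facts → hypotheses flow. The data shapes and rules are mine, invented for illustration, not Kelet's actual data model:

```python
# Hypothetical sketch of signals -> facts -> hypotheses (not Kelet's real model).
from dataclasses import dataclass, field

@dataclass
class Session:
    session_id: str
    signals: list = field(default_factory=list)   # raw signals, e.g. {"kind": "user_edit"}
    facts: list = field(default_factory=list)     # discrete facts extracted from signals
    hypotheses: list = field(default_factory=list)

def extract_facts(session):
    """Turn raw signals into discrete facts about the session (step 2)."""
    for s in session.signals:
        if s["kind"] == "user_edit":
            session.facts.append("user rewrote the agent's answer")
        elif s["kind"] == "thumbs_down":
            session.facts.append("user rated the answer negatively")

def form_hypotheses(session):
    """Propose candidate explanations for what went wrong (step 3)."""
    if "user rewrote the agent's answer" in session.facts:
        session.hypotheses.append("answer missed the user's actual intent")

s = Session("sess-1", signals=[{"kind": "user_edit"}, {"kind": "thumbs_down"}])
extract_facts(s)
form_hypotheses(s)
print(s.facts, s.hypotheses)
```

In a real system the fact extraction and hypothesis generation would be LLM-driven rather than hand-written rules; the rules here just show where each step's output comes from.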

The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.
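A toy version of that clustering step, assuming the grouping key is just normalized hypothesis text (a real system would presumably use embeddings or LLM-based grouping; the session IDs and hypotheses below are made up):

```python
# Toy clustering: per-session hypotheses look random in isolation,
# but grouping similar ones across sessions surfaces a pattern.
from collections import defaultdict

hypotheses = [
    ("sess-1", "Agent ignored the date filter in the query"),
    ("sess-2", "agent ignored the date filter in the query."),
    ("sess-3", "Tool call timed out"),
    ("sess-4", "Agent ignored the date filter in the query"),
]

def normalize(h):
    # Stand-in for real similarity: lowercase, strip trailing period.
    return h.lower().rstrip(".")

clusters = defaultdict(list)
for session_id, h in hypotheses:
    clusters[normalize(h)].append(session_id)

# The largest cluster is the most likely systemic failure to investigate first.
top = max(clusters.items(), key=lambda kv: len(kv[1]))
print(top)
```

Three of four sessions collapse into one "ignored the date filter" cluster, which is the signal a human scrolling individual traces would struggle to see.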

The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.
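For manual setup, the integration presumably boils down to emitting signals tied to a session ID. A hypothetical sketch of what that instrumentation could look like; the function name and payload shape are mine, not the real Kelet SDK API:

```python
# Hypothetical signal emitter -- illustrates the shape of manual
# instrumentation, NOT the actual Kelet SDK.
import json
import time

def log_signal(session_id, kind, payload):
    """Record one signal (feedback, edit, judge score, ...) for a session."""
    event = {
        "session_id": session_id,
        "kind": kind,
        "payload": payload,
        "ts": time.time(),
    }
    # A real SDK would ship this to a backend; here we just serialize it.
    return json.dumps(event)

print(log_signal("sess-42", "user_feedback", {"rating": "thumbs_down"}))
```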

It’s currently free during beta. No credit card required. Docs: https://kelet.ai/docs/

I'd love feedback on the approach, especially from anyone running agents in prod. Does automating this kind of manual error analysis sound like the right direction?
