
Test, monitor, and improve your voice agents
Roark helps teams test, monitor, and improve their voice agents. Voice AI is exploding, but reliability is still the #1 challenge: teams spend hours manually testing agents and still miss failures. In the last 6 months, Roark has processed over 10M minutes of calls, powering monitoring and simulation for teams across the Voice AI ecosystem (including YC companies). We close that gap by combining:

📊 Monitoring & Evaluation - 40+ built-in metrics (latency, instruction-following, sentiment, etc.), custom dashboards, alerts, and the ability to define your own metrics. Supports up to 15 speakers with automatic speaker identification.

🎭 Simulations & Personas - end-to-end phone/WebSocket simulations for inbound and outbound agents, with configurable personas (accents, languages, speech and behavior profiles). Define tests as conversations with a graph-based approach, so you can easily branch into edge cases and variants.

🔁 Full lifecycle loop - failed calls automatically become repeatable tests, so every failure makes your agent stronger.

Roark is the missing QA layer for Voice AI, helping teams ship agents they can trust.