Voice AI Assistant for Medicare Patients
Building an accessible voice-driven care coordination device for elderly patients
The Challenge
A healthcare company needed an assistant device for Medicare patients, similar to Alexa but purpose-built for healthcare. Running on iPads, the device had to coordinate care, provide entertainment, and support natural voice interactions for elderly users. The challenge was building a production-ready MVP that could handle real-time voice conversations at low latency while remaining accessible to seniors.
The Solution
We built a voice-enabled care assistant using LiveKit Agents for real-time voice processing, with Deepgram for speech-to-text and Cartesia for natural-sounding text-to-speech. The frontend runs as a Next.js web app optimized for iPads, supporting both touch and voice interactions. The Python backend orchestrates the AI pipeline and integrates with healthcare systems. We integrated Langfuse for full observability across the voice pipeline: tracing every conversation, measuring latency at each step, and monitoring costs.
LiveKit accelerated our development significantly. The framework's well-designed abstractions and excellent documentation enabled us to deploy the first working version for customers in just two weeks. Throughout development, we engaged with the LiveKit community: opening issues, asking for support, and even contributing back to the repository. The community and core contributors were responsive and helpful, which made the integration smooth.
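The turn-taking flow described above (speech-to-text, language model, text-to-speech) can be sketched as a simple async orchestration loop. This is a minimal, stdlib-only illustration: the stage functions and their signatures are placeholders standing in for the real Deepgram, LLM, and Cartesia integrations, which in production are wired together through LiveKit Agents rather than called directly like this.

```python
import asyncio

# Placeholder stages standing in for the real Deepgram STT, LLM,
# and Cartesia TTS calls (hypothetical signatures, not real APIs).
async def speech_to_text(audio: bytes) -> str:
    await asyncio.sleep(0)  # a network round-trip in the real pipeline
    return "what time is my appointment"

async def generate_reply(transcript: str) -> str:
    await asyncio.sleep(0)
    return f"You asked: {transcript}. Your appointment is at 2 PM."

async def text_to_speech(reply: str) -> bytes:
    await asyncio.sleep(0)
    return reply.encode("utf-8")  # stand-in for synthesized audio

async def handle_turn(audio: bytes) -> bytes:
    """One conversational turn: STT -> LLM -> TTS."""
    transcript = await speech_to_text(audio)
    reply = await generate_reply(transcript)
    return await text_to_speech(reply)

audio_out = asyncio.run(handle_turn(b"\x00\x01"))
```

Keeping each stage behind its own async function is also what makes per-stage tracing straightforward: an observability layer such as Langfuse can wrap each call as a separate span.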
Results
- First working MVP deployed to customers in 2 weeks
- Sub-500ms voice response latency achieved
- Touch and voice multimodal interface for accessibility
- Interactive games for patient engagement
- Full pipeline observability with Langfuse tracing
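The sub-500ms figure above is a whole-turn budget, and hitting it requires timing each pipeline stage independently, which is what the Langfuse tracing provides in production. A minimal stdlib sketch of that per-stage timing pattern (the stage names and simulated delays here are illustrative, not the production values):

```python
import time
from contextlib import contextmanager

latencies: dict[str, float] = {}

@contextmanager
def traced(stage: str):
    """Record wall-clock latency for one pipeline stage, in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        latencies[stage] = (time.perf_counter() - start) * 1000

# Simulated stages; in production each span is reported to Langfuse.
with traced("stt"):
    time.sleep(0.01)
with traced("llm"):
    time.sleep(0.02)
with traced("tts"):
    time.sleep(0.01)

total_ms = sum(latencies.values())
within_budget = total_ms < 500  # the 500 ms turn budget from this case study
```

Measuring stages separately, rather than only the end-to-end time, shows which component to optimize when a turn exceeds the budget.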
Technologies Used
- LiveKit Agents
- Deepgram
- Cartesia
- Next.js
- Python
- Langfuse
Verified by Clutch
This case study has been independently verified by Clutch.