Custom wake words and fixed voice commands — in a single on-device model. Train "hey kitchen" to activate, "next step" to navigate, "stop timer" to control. No speech-to-text needed. First model free.
You want to add voice activation to your project — a smart home setup, a mobile app, or a Raspberry Pi. But the only options are enterprise SDKs with per-user pricing that kills your margins — or research projects you have to wrangle yourself.
Outspoken fills the gap. Train a custom wake word through our web UI, download a standard ONNX model, and deploy anywhere. Credit-based pricing. No subscriptions. No per-user fees. First model free.
Your trained ONNX model works everywhere ONNX Runtime runs. No vendor lock-in. No proprietary formats.
openWakeWord models work natively in Home Assistant's voice pipeline. Train your wake word, drop it in, done.
Compatible with HA 2024.x+
View HA docs → Setup guide →
Load your .onnx model with the openwakeword library. Raspberry Pi, edge servers, custom hardware.
pip install openwakeword
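For the Python path, a minimal sketch of how a downloaded model plugs into the openwakeword library; the model filename, frame size, and threshold below are illustrative placeholders, not prescribed values:

```python
# Sketch only: assumes `pip install openwakeword` and a trained .onnx model.
# "hey_kitchen.onnx", the 0.5 threshold, and the frame size are placeholders.

def activated(scores, threshold=0.5):
    """True if any wake word or intent score crosses the threshold."""
    return any(score >= threshold for score in scores.values())

def run_listener(model_path="hey_kitchen.onnx"):
    """Feed 80 ms audio frames (16 kHz, 16-bit mono) to the model."""
    import numpy as np
    from openwakeword.model import Model

    model = Model(wakeword_models=[model_path])
    # In a real app, read frames from a microphone stream (e.g. sounddevice)
    # in a loop; here a silent frame stands in for one mic frame.
    frame = np.zeros(1280, dtype=np.int16)
    scores = model.predict(frame)  # dict of per-model scores
    if activated(scores):
        print("wake word detected")
```

All inference stays in-process; there is no network call anywhere in the loop.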
ONNX Runtime for Python → Python guide →
Use onnxruntime-react-native for on-device wake word detection on iOS and Android. No cloud calls.
npm install onnxruntime-react-native
onnxruntime-react-native docs → React Native guide →
Run models with onnxruntime-web. Test in our dashboard or integrate into your own web app.
Already working in the Outspoken dashboard
onnxruntime-web docs →
ONNX Runtime has official bindings for native apps, game engines, and embedded systems.
Standard ONNX format, no proprietary dependencies
ONNX Runtime bindings →
Dedicated Expo module with useWakeWord hook, config plugin, and prebuild support.
expo-outspoken
All models use the standard ONNX format. ~700KB–2MB per model. No proprietary runtime. No vendor lock-in.
Not just "Hey Siri" clones. Outspoken gives you activation words and intent detection in a single model — on-device, private, no speech recognition needed.
Custom activation phrases like "Hey Chef" or "OK Buddy". Always listening, near-zero false positives. Train your own in minutes.
Fixed command detection — "next step", "go back", "start timer". Each intent adds less than 1ms of processing. No speech-to-text needed.
Models run locally via ONNX Runtime. No audio leaves the device. No server calls. No network latency. Works offline.
Train in English, Dutch, German, and French — with more coming. No per-language fees. Same credit cost for every language.
Standard .onnx models (700KB–2MB) that run on any ONNX Runtime. Python, React Native, C++, browsers, Home Assistant — no vendor lock-in.
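Dispatching a fixed command from per-intent scores like these can stay trivially simple on the app side. A sketch, with hypothetical intent names and an illustrative 0.5 threshold:

```python
def top_intent(scores, threshold=0.5):
    """Return the highest-scoring intent, or None if nothing fires.
    Intent names and the 0.5 threshold are illustrative placeholders."""
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None

# Wire the result to app actions, like a voice-activated button, e.g.:
# if top_intent(scores) == "next_step": show_next_recipe_step()
```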
Train custom models through our web UI. Upload nothing — we synthesize training data. Your model is ready in ~45 minutes.
Think of it like a button, but activated by voice. Triggers must be predefined — this is not transcription or conversational AI.
No ML expertise needed. No audio samples to collect. Go from idea to working voice control in under an hour.
Choose a wake word and any intents. Type them in — any language, any phrase.
Synthetic voice data, noise augmentation, negative mining. Done in ~45 minutes.
Lightweight models (700KB–2MB). Bundle in your app or load dynamically.
Load in Home Assistant, Python, React Native, or the browser. Your model, your platform.
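The final step is plain ONNX Runtime plus audio framing. A minimal Python sketch; the model path is a placeholder, and the 80 ms frame size is an assumption matching openwakeword's defaults:

```python
def pcm_frames(raw, chunk_samples=1280):
    """Split 16-bit mono PCM bytes into fixed-size frames.
    1280 samples = 80 ms at 16 kHz; a trailing partial frame is dropped."""
    import array
    samples = array.array("h", raw)
    for i in range(0, len(samples) - chunk_samples + 1, chunk_samples):
        yield samples[i : i + chunk_samples]

def load_session(path="hey_kitchen.onnx"):
    """Open a trained model with vanilla ONNX Runtime
    (`pip install onnxruntime`); the path is a placeholder for your
    downloaded .onnx file."""
    import onnxruntime as ort
    return ort.InferenceSession(path, providers=["CPUExecutionProvider"])
```

Because the file is standard ONNX, the same model loads unchanged in onnxruntime-web, onnxruntime-react-native, or any other ONNX Runtime binding.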
Want to understand the training parameters? Read the training guide →
Buy credits, train what you need, keep it forever. No subscriptions, no per-user fees, no recurring costs. 1 credit = €1.
No credit card needed
1 credit = €1 · 1 free credit on signup · credits never expire
All models run on-device. Zero ongoing costs after training. No per-user fees. Audio never leaves the device.
Train a custom wake word for 1 credit. Standard ONNX format. Deploy anywhere. Keep it forever. First credit free.