Open source · ONNX · Runs anywhere

Speak with intent.
Train custom wake words. Run anywhere.

Custom wake words and fixed voice commands — in a single on-device model. Train "hey kitchen" to activate, "next step" to navigate, "stop timer" to control. No speech-to-text needed. First model free.

Voice control shouldn't cost $6K/year.

You want to add voice activation to your project — a smart home setup, a mobile app, or a Raspberry Pi. But the only options are enterprise SDKs with per-user pricing that kills your margins — or research projects you have to wrangle yourself.

Outspoken fills the gap. Train a custom wake word through our web UI, download a standard ONNX model, and deploy anywhere. Credit-based pricing. No subscriptions. No per-user fees. First model free.

The current landscape

| Option | Notes | Cost |
| --- | --- | --- |
| Picovoice Porcupine | Per-user pricing, no indie tier | $6,000/yr |
| DaVoice.io | "Email us for pricing" | ??? |
| openWakeWord | Python only, DIY training pipeline | Free but… |
| Outspoken | Universal ONNX, self-service training | 1 credit |
Integrations

One model. Every platform.

Your trained ONNX model works everywhere ONNX Runtime runs. No vendor lock-in. No proprietary formats.

Available

Home Assistant

openWakeWord models work natively in Home Assistant's voice pipeline. Train your wake word, drop it in, done.

Compatible with HA 2024.x+

View HA docs → · Setup guide →
Available

Python

Load your .onnx model with the openwakeword library. Raspberry Pi, edge servers, custom hardware.

pip install openwakeword

ONNX Runtime for Python → · Python guide →
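
Whatever runtime you use, the detection loop has the same shape: feed fixed-size audio frames to the model and compare each score against a threshold. A minimal sketch with a stubbed scorer standing in for openwakeword's `Model.predict` (the 1280-sample frame size follows openwakeword's 80 ms at 16 kHz default; the 0.5 threshold is an assumption — tune it for your model):

```python
from typing import Callable, Iterator, List

FRAME_SAMPLES = 1280  # 80 ms at 16 kHz, openwakeword's default frame size

def frames(audio: List[int], size: int = FRAME_SAMPLES) -> Iterator[List[int]]:
    """Yield consecutive fixed-size frames from a mono int16 sample buffer."""
    for start in range(0, len(audio) - size + 1, size):
        yield audio[start:start + size]

def stub_score(frame: List[int]) -> float:
    """Stand-in for the real model call; returns a wake-word score in [0, 1]."""
    return 0.0  # replace with a call to openwakeword's Model.predict()

def detect(audio: List[int],
           score: Callable[[List[int]], float] = stub_score,
           threshold: float = 0.5) -> bool:
    """True if any frame scores at or above the threshold."""
    return any(score(f) >= threshold for f in frames(audio))

# One second of silence at 16 kHz never trips the stubbed detector.
silence = [0] * 16000
print(detect(silence))  # False
```

In a real integration the microphone stream replaces the static buffer and the stub is swapped for the loaded `.onnx` model; the framing and thresholding logic stays the same.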
Available

React Native

Use onnxruntime-react-native for on-device wake word detection on iOS and Android. No cloud calls.

npm install onnxruntime-react-native

onnxruntime-react-native docs → · React Native guide →
Available

Web / Browser

Run models with onnxruntime-web. Test in our dashboard or integrate into your own web app.

Already working in the Outspoken dashboard

onnxruntime-web docs →
Available

C++ / C# / Java

ONNX Runtime has official bindings for native apps, game engines, and embedded systems.

Standard ONNX format, no proprietary dependencies

ONNX Runtime bindings →
Coming Soon

Expo

Dedicated Expo module with useWakeWord hook, config plugin, and prebuild support.

expo-outspoken

All models use the standard ONNX format. ~700KB–2MB per model. No proprietary runtime. No vendor lock-in.

Capabilities

Wake words. Speak with intent.
One platform.

Not just "Hey Siri" clones. Outspoken gives you activation words and intent detection in a single model — on-device, private, no speech recognition needed.

🎙️

Wake Words

Custom activation phrases like "Hey Chef" or "OK Buddy". Always listening, near-zero false positives. Train your own in minutes.

Speak with Intent

Fixed command detection — "next step", "go back", "start timer". Each intent adds less than 1ms of processing. No speech-to-text needed.

🔒

100% On-Device

Models run locally via ONNX Runtime. No audio leaves the device. No server calls, no network latency. Works offline.

🌍

Multi-Language

Train in English, Dutch, German, and French — with more coming. No per-language fees. Same credit cost for every language.

📦

Universal ONNX

Standard .onnx models (700KB–2MB) that run on any ONNX Runtime. Python, React Native, C++, browsers, Home Assistant — no vendor lock-in.

🧪

Self-Service Training

Train custom models through our web UI. Upload nothing — we synthesize training data. Your model is ready in ~45 minutes.

Perfect for

Custom wake words for Home Assistant
Voice-activated IoT and Raspberry Pi
Hands-free mobile app navigation
Accessibility for users with limited mobility
Voice commands in games
Kiosk and retail voice interfaces
Privacy-focused products
Robotics wake words and triggers

Think of it like a button, but activated by voice. Triggers must be predefined — this is not transcription or conversational AI.
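
The button metaphor maps directly onto code: each predefined trigger becomes a key in a dispatch table, and a detection above threshold fires its handler. A hypothetical sketch — the handler return values and the 0.5 threshold are illustrative, not part of any Outspoken API:

```python
from typing import Callable, Dict, Optional

# Hypothetical dispatch table: each predefined trigger acts like a button
# wired to a callback. The phrases match the examples above.
HANDLERS: Dict[str, Callable[[], str]] = {
    "next step": lambda: "advanced to next step",
    "go back": lambda: "went back",
    "stop timer": lambda: "timer stopped",
}

def on_intent(name: str, score: float, threshold: float = 0.5) -> Optional[str]:
    """Fire the handler for a detected intent; ignore low-confidence hits
    and phrases that were never defined as triggers."""
    if score >= threshold and name in HANDLERS:
        return HANDLERS[name]()
    return None

print(on_intent("stop timer", 0.92))  # timer stopped
print(on_intent("stop timer", 0.10))  # None
```

Anything outside the table is simply ignored — which is exactly why this is not transcription: the model only knows the triggers it was trained on.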

Supported languages

English · Dutch · German · French · + more coming

How it works

Train. Download. Ship.

No ML expertise needed. No audio samples to collect. Go from idea to working voice control in under an hour.

01

Pick your triggers

Choose a wake word and any intents. Type them in — any language, any phrase.

02

We train the model

Synthetic voice data, noise augmentation, negative mining. Done in ~45 minutes.

03

Download .onnx

Lightweight models (700KB–2MB). Bundle in your app or load dynamically.

04

Deploy anywhere

Load in Home Assistant, Python, React Native, or the browser. Your model, your platform.

Want to understand the training parameters? Read the training guide →

Pricing

Pay per model. Not per month.

Buy credits, train what you need, keep it forever. No subscriptions, no per-user fees, no recurring costs. 1 credit = €1.

Open Source
Prototype & non-commercial
Free

No credit card needed

  • ONNX model downloads
  • Pre-trained English models
  • Community intent models
  • Non-commercial use
  ✗ Custom model training
  ✗ Multi-language
Get started
Most popular
Pay as you go
Train custom models, ship commercially
1 credit / model

1 credit = €1 · 1 free credit on signup · credits never expire

  • Everything in Open Source
  • Custom wake words & intents
  • EN, NL, DE, FR (more coming)
  • Commercial use
  • Self-service training UI
  • Models are yours forever
Start free — 1 credit included

All models run on-device. Zero ongoing costs after training. No per-user fees. Audio never leaves the device.

Training presets

| Preset | Credits | Price |
| --- | --- | --- |
| Fast | 0.75 credits | €0.75 |
| Balanced | 1 credit | €1 |
| Premium | 1.5 credits | €1.50 |

vs. Picovoice: €6,000/yr

Say the word.

Train a custom wake word for 1 credit. Standard ONNX format. Deploy anywhere. Keep it forever. First credit free.

Works with: Home Assistant · Python · React Native · Web · C++ · and more