moonshine-ai / moonshine
Fast and accurate automatic speech recognition (ASR) for edge devices
Voice Interfaces for Everyone
Moonshine Voice is an open source AI toolkit for developers building real-time voice applications.
Join our community on Discord to get live support.
pip install moonshine-voice
python -m moonshine_voice.mic_transcriber --language en
Listens to the microphone and prints updates to the transcript as they come in.
python -m moonshine_voice.intent_recognizer
Listens for user-defined action phrases, like "Turn on the lights", using semantic matching so natural language variations are recognized. For more, check out our "Getting Started" Colab notebook and video.
Download github.com/moonshine-ai/moonshine/releases/latest/download/ios-examples.tar.gz, extract it, and then open the Transcriber/Transcriber.xcodeproj project in Xcode.
Download github.com/moonshine-ai/moonshine/releases/latest/download/android-examples.tar.gz, extract it, and then open the Transcriber folder in Android Studio.
Download or git clone this repository and then run:
cd core
mkdir build
cd build
cmake ..
cmake --build .
./moonshine-cpp-test
Download github.com/moonshine-ai/moonshine/releases/latest/download/macos-examples.tar.gz, extract it, and then open the MicTranscription/MicTranscription.xcodeproj project in Xcode.
Download github.com/moonshine-ai/moonshine/releases/latest/download/windows-examples.tar.gz, extract it, and then open the cli-transcriber\cli-transcriber.vcxproj project in Visual Studio.
Install Moonshine in Python for model downloading.
In the terminal:
pip install moonshine-voice
cd examples\windows\cli-transcriber
.\download-lib.bat
msbuild cli-transcriber.sln /p:Configuration=Release /p:Platform=x64
python -m moonshine_voice.download --language en
x64\Release\cli-transcriber.exe --model-path <path from the download command> --model-arch <number from the download command>
You'll need a USB microphone plugged in to get audio input, but the Python pip package has been optimized for the Pi, so you can run:
sudo pip install --break-system-packages moonshine-voice
python -m moonshine_voice.mic_transcriber --language en
I've recorded a screencast on YouTube to help you get started, and you can also download github.com/moonshine-ai/moonshine/releases/latest/download/raspberry-pi-examples.tar.gz for some fun, Pi-specific examples. The README has information about using a virtual environment for the Python install if you don't want to use --break-system-packages.
TL;DR: use Moonshine when you're working with live speech.
| Model | WER | # Parameters | Latency (MacBook Pro) | Latency (Linux x86) |
|---|---|---|---|---|
| Moonshine Medium Streaming | 6.65% | 245 million | 258ms | 347ms |
| Whisper Large v3 | 7.44% | 1.5 billion | 11,286ms | 16,919ms |
| Moonshine Small Streaming | 7.84% | 123 million | 148ms | 201ms |
| Whisper Small | 8.59% | 244 million | 1,940ms | 3,425ms |
| Moonshine Tiny Streaming | 12.00% | 34 million | 50ms | 76ms |
| Whisper Tiny | 12.81% | 39 million | 277ms | 1,141ms |
See benchmarks for how these numbers were measured.
OpenAI's release of their Whisper family of models was a massive step forward for open-source speech to text. They offered a range of sizes, allowing developers to trade off compute and storage space against accuracy to fit their applications. Their biggest models, like Large v3, also gave accuracy scores that were higher than anything available outside of large tech companies like Google or Apple. At Moonshine we were early and enthusiastic adopters of Whisper, and we still remain big fans of the models and the great frameworks like FasterWhisper and others that have been built around them.
However, as we built applications that needed a live voice interface we found we needed features that weren't available through Whisper:
82 languages are listed, but only 33 have sub-20% WER (what we consider usable). For the Base model size commonly used on edge devices, only 5 languages are under 20% WER. Asian languages like Korean and Japanese stand out as the native tongues of large markets with a lot of tech innovation, but Whisper doesn't offer good enough accuracy to use in most applications. The proprietary in-house versions of Whisper that are available through OpenAI's cloud API seem to offer better accuracy, but aren't available as open models.
All these limitations drove us to create our own family of models that better meet the needs of live voice interfaces. It took us some time since the combined size of the open speech datasets available is tiny compared to the amount of web-derived text data, but after extensive data-gathering work, we were able to release the first generation of Moonshine models. These removed the fixed-input window limitation along with some other architectural improvements, and gave significantly lower latency than Whisper in live speech applications, often running 5x faster or more.
However we kept encountering applications that needed even lower latencies on even more constrained platforms. We also wanted to offer higher accuracy than the Base-equivalent that was the top end of the initial models. That led us to this second generation of Moonshine models, which offer:
Hopefully this gives you a good idea of how Moonshine compares to Whisper. If you're working with GPUs in the cloud on data in bulk where throughput is most important then Whisper (or Nvidia alternatives like Parakeet) offer advantages like batch processing, but we believe we can't be beat for live speech. We've built the framework and models we wished we'd had when we first started building applications with voice interfaces, so if you're working with live voice inputs, give Moonshine a try.
The Moonshine API is designed to take care of the details around capturing and transcribing live speech, giving application developers a high-level API focused on actionable events. I'll use Python to illustrate how it works, but the API is consistent across all the supported languages.
Our goal is to build a framework that any developer can pick up and use, even with no previous experience of speech technologies. We've abstracted away a lot of the unnecessary details and provide a simple interface that lets you focus on building your application, and that's reflected in our system architecture.
The basic flow is:
Create a Transcriber or IntentRecognizer object, depending on whether you want the text that's spoken, or just to know that a user has requested an action.
Add an EventListener that gets called when important things occur, like the end of a phrase or an action being triggered, so your application can respond.
Traditionally, adding a voice interface to an application or product required integrating a lot of different libraries to handle all the processing that's needed to capture audio and turn it into something actionable. The main steps involved are microphone capture, voice activity detection (to break a continuous stream of audio into sections of speech), speech to text, speaker identification, and intent recognition. Each of these steps typically involved a different framework, which greatly increased the complexity of integrating, optimizing, and maintaining these dependencies.
Moonshine Voice includes all of these stages in a single library, and abstracts away everything but the essential information your application needs to respond to user speech, whether you want to transcribe it or trigger actions.
Most developers should be able to treat the library as a black box that tells them when something interesting has happened, using our event-based classes to implement application logic. Of course the framework is fully open source, so speech experts can dive as deep under the hood as they'd like, but it's not necessary to use it.
A Transcriber takes in audio input and turns any speech into text. This is the first object you'll need to create to use Moonshine, and you'll give it a path to the models you've downloaded.
A MicTranscriber is a helper class based on the general transcriber that takes care of connecting to a microphone using your platform's built-in support (for example sounddevice in Python) and then feeding the audio in as it's captured.
A Stream is a handler for audio input. The reason streams exist is because you may want to process multiple audio inputs at once, and a transcriber can support those through multiple streams, without duplicating the model resources. If you only have one input, the transcriber class includes the same methods (start/stop/add_audio) as a stream, and you can use that interface instead and forget about streams.
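As a rough sketch of the multi-stream case (using the create_stream() call documented in the API reference below, assuming the classes import directly from moonshine_voice, and with placeholder variables for the model path, architecture, and audio chunks):
from moonshine_voice import Transcriber

# One transcriber loads the model weights a single time...
transcriber = Transcriber(model_path=model_path, model_arch=model_arch)

# ...but each audio source gets its own stream, with its own transcript.
mic_stream = transcriber.create_stream()
loopback_stream = transcriber.create_stream()

mic_stream.start()
loopback_stream.start()

# Feed each source's chunks to its own stream; events carry a stream handle
# so listeners can tell the sources apart.
mic_stream.add_audio(mic_chunk, mic_sample_rate)
loopback_stream.add_audio(loopback_chunk, loopback_sample_rate)

mic_stream.stop()
loopback_stream.stop()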
A TranscriptLine is a data structure holding information about one line in the transcript. When someone is speaking, the library waits for short pauses (where punctuation might go in written language) and starts a new line. These aren't exactly sentences, since a speech pause isn't a sure sign of the end of a sentence, but this does break the spoken audio into segments that can be considered phrases. A line includes state such as whether the line has just started, is still being spoken, or is complete, along with its start time and duration.
A Transcript is a list of lines in time order holding information about what text has already been recognized, along with other state like when it was captured.
A TranscriptEvent contains information about changes to the transcript. Events include a new line being started, the text in a line being updated, and a line being completed. The event object includes the transcript line it's referring to as a member, holding the latest state of that line.
A TranscriptEventListener is a protocol that allows app-defined functions to be called when transcript events happen. This is the main way that most applications interact with the results of the transcription. When live speech is happening, applications usually need to respond or display results as new speech is recognized, and this approach allows you to handle those changes in a similar way to events from traditional user interfaces like touch screen gestures or mouse clicks on buttons.
An IntentRecognizer is a type of TranscriptEventListener that allows you to invoke different callback functions when preprogrammed intents are detected. This is useful for building voice command recognition features.
We have examples for most platforms so as a first step I recommend checking out what we have for the systems you're targeting.
Next, you'll need to add the library to your project. We aim to provide pre-built binaries for all major platforms using their native package managers. On Python this means a pip install, for Android it's a Maven package, and for MacOS and iOS we provide a Swift package through SPM.
The transcriber needs access to the files for the model you're using, so after downloading them you'll need to place them somewhere the application can find them, and make a note of the path. This usually means adding them as resources in your IDE if you're planning to distribute the app, or you can use hard-wired paths if you're just experimenting. The download script gives you the location of the models and their architecture type on your drive after it completes.
Now you can try creating a transcriber. Here's what that looks like in Python:
transcriber = Transcriber(model_path=model_path, model_arch=model_arch)
If the model isn't found, or if there's any other error, this will throw an exception with information about the problem. You can also check the console for logs from the core library; these are printed to stderr or your system's equivalent.
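If you'd rather handle that failure yourself than let it propagate, here's a minimal sketch (catching the generic exception type, since the exact class can vary by binding):
try:
    transcriber = Transcriber(model_path=model_path, model_arch=model_arch)
except Exception as err:
    # The message carries details from the core library; the stderr logs
    # usually add more context.
    print(f"Could not create transcriber: {err}")
    raise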
Now we'll create a listener that contains the app logic that you want triggered when the transcript updates, and attach it to your transcriber:
class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(f"Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"Line completed: {event.line.text}")
listener = TestListener()
transcriber.add_listener(listener)
The transcriber needs some audio data to work with. If you want to try it with the microphone you can update your transcriber creation line to use a MicTranscriber instead, but if you want to start with a .wav file for testing purposes here's how you feed that in:
audio_data, sample_rate = load_wav_file(wav_path)
transcriber.start()
# Loop through the audio data in chunks to simulate live streaming
# from a microphone or other source.
chunk_duration = 0.1
chunk_size = int(chunk_duration * sample_rate)
for i in range(0, len(audio_data), chunk_size):
    chunk = audio_data[i: i + chunk_size]
    transcriber.add_audio(chunk, sample_rate)
transcriber.stop()
The important things to notice here are:
The audio is loaded with the load_wav_file() function that's part of the Moonshine library.
In a real application you'd be calling add_audio() from an audio handler that's receiving it from your source. Since the library can handle arbitrary durations and sample rates, just make sure it's mono and otherwise feed it in as-is.
The transcriber analyses the speech at a default interval of every 500ms of input. You can change this with the update_interval argument to the transcriber constructor. For streaming models most of the work is done as the audio is being added, and it's automatically done at the end of a phrase, so changing this won't usually affect the workload or latency massively.
The key takeaway is that you usually don't need to worry about the transcript data structure itself, the event system tells you when something important happens. You can manually trigger a transcript update by calling update_transcription() which returns a transcript object with all of the information about the current session if you do need to examine the state.
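For example, here's a sketch that slows the periodic updates down and pulls the transcript on demand instead (placeholder model variables as before; treat this as illustrative rather than canonical):
# Only run the speech to text model once per two seconds of added audio.
transcriber = Transcriber(
    model_path=model_path,
    model_arch=model_arch,
    update_interval=2.0,
)
transcriber.start()
transcriber.add_audio(chunk, sample_rate)
# Pull the current state whenever your application needs it, for example
# when the user presses a refresh button. The returned Transcript holds
# everything recognized so far in this session.
transcript = transcriber.update_transcription()
transcriber.stop()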
By calling start() and stop() on a transcriber (or stream) we're beginning and ending a session. Each session has one transcript document associated with it, and it is started fresh on every start() call, so you should make copies of any data you need from the transcript object before that.
The transcriber class also offers a simpler transcribe_without_streaming() method, for when you have an array of data from the past that you just want to analyse, such as a file or recording.
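For example, here's a sketch of running a pre-recorded file through it (reusing the load_wav_file() helper and the TestListener class from the earlier example):
audio_data, sample_rate = load_wav_file(wav_path)
transcriber.add_listener(TestListener())
# Processes the whole recording in one call; registered listeners still
# receive line events as they are produced.
transcriber.transcribe_without_streaming(audio_data, sample_rate)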
We also offer a specialization of the base Transcriber class called MicTranscriber. How this is implemented will depend on the language and platform, but it should provide a transcriber that's automatically attached to the main microphone on the system. This makes it straightforward to start transcribing speech from that common source, since it supports all of the same listener callbacks as the base class.
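Here's a minimal sketch of that path, assuming MicTranscriber takes the same model_path and model_arch arguments as Transcriber (check the platform examples for the exact constructor):
import time
from moonshine_voice import MicTranscriber

mic_transcriber = MicTranscriber(model_path=model_path, model_arch=model_arch)
mic_transcriber.add_listener(TestListener())
mic_transcriber.start()   # begins capturing from the default microphone
time.sleep(30)            # transcribe live speech for thirty seconds
mic_transcriber.stop()    # ends the session and completes any active line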
The main communication channel between the library and your application is through events that are passed to any listener functions you have registered. There are four major event types:
LineStarted. This is sent to listeners when the beginning of a new speech segment is detected. It may or may not contain any text, but since it's dispatched near the start of an utterance, that text is likely to change over time.
LineUpdated. Called whenever any of the information about a line changes, including the duration, audio data, and text.
LineTextChanged. Called only when the text associated with a line is updated. This is a subset of LineUpdated that focuses on the common need to refresh the text shown to users as often as possible to keep the experience interactive.
LineCompleted. Sent when we detect that someone has paused speaking, and we've ended the current segment. The line data structure has the final values for the text, duration, and speaker ID.
We offer some guarantees about these events:
LineStarted is always called exactly once for any segment.
LineCompleted is always called exactly once after LineStarted for any segment.
LineUpdated and LineTextChanged will only ever be called after the LineStarted and before the LineCompleted events for a segment (you can reduce how often they're sent by setting update_interval to a very large value).
Once LineCompleted has been called, the library will never alter that line's data again.
When stop() is called on a transcriber or stream, any active lines will have LineCompleted called.
Every line has a lineId that is designed to be unique enough to avoid collisions.
The lineId remains the same for the line over time, from the first LineStarted event onwards.
If you want your application to respond when users talk, you need to understand what they're saying. The previous generation of voice interfaces could only recognize speech that was phrased in exactly the form they expected. For example "Alexa, turn on living-room lights" might work, but "Alexa, lights on in the living room please" might not. The general problem of figuring out what a user wants from natural speech is known as intent recognition. There have been decades of research into this area, but the rise of transformer-based LLMs has given us new tools. We have integrated some of these advances into Moonshine Voice's command recognition API.
The basic idea is that your application registers some general actions you're interested in, like "Turn the lights on" or "Move left", and then Moonshine sends an event when the user says something that matches the meaning of those phrases. It works a lot like a graphical user interface - you define a button (action) and an event callback that is triggered when the user presses that button.
To give it a try for yourself, run this built-in example:
python -m moonshine_voice.intent_recognizer
This will present you with a menu of command phrases, and then start listening to the microphone. If you say something that's a variant on one of the phrases you'll see a "triggered" log message telling you which action was matched, along with how confident the system is in the match.
📝 Let there be light.
'TURN ON THE LIGHTS' triggered by 'Let there be light.' with 76% confidence
To show that you can modify these at run time, try supplying your own list of phrases as a comma-separated string argument to --intents.
python -m moonshine_voice.intent_recognizer --intents "Turn left, turn right, go backwards, go forward"
This could be the core command set to control a robot's movement, for example. It's worth spending a bit of time experimenting with different wordings of the command phrases, and different variations on the user side, to get a feel for how the system works.
Under the hood this is all accomplished using two main classes. We've met the MicTranscriber above, the new addition is IntentRecognizer. This listens to the results of the transcriber, fuzzily matches completed lines against any intents that have been registered with it, and calls back the client-supplied code.
The fuzzy matching uses a sentence-embedding model based on Gemma300m, so the first step is downloading it and getting the path:
embedding_model_path, embedding_model_arch = get_embedding_model(
    args.embedding_model, args.quantization
)
Once we have the model's location, we create an IntentRecognizer using that path. The only other argument is the threshold we use for fuzzy matching. It's between 0 and 1, with low numbers producing more matches but at the cost of less accuracy, and vice versa for high values.
intent_recognizer = IntentRecognizer(
    model_path=embedding_model_path,
    model_arch=embedding_model_arch,
    model_variant=args.quantization,
    threshold=args.threshold,
)
def on_intent_triggered_on(trigger: str, utterance: str, similarity: float):
    print(f"\n'{trigger.upper()}' triggered by '{utterance}' with {similarity:.0%} confidence")

for intent in intents:
    intent_recognizer.register_intent(intent, on_intent_triggered_on)
The recognizer supports the transcript event listener interface, so the final stage is adding it as a listener to the MicTranscriber.
mic_transcriber.add_listener(intent_recognizer)
Once you start the transcriber, it will listen out for any variations on the supplied phrases, and call on_intent_triggered_on() whenever there's a match.
The current intent recognition is designed for full-sentence matching, which works well for straightforward commands, but we will be expanding into more advanced "slot filling" techniques in the future, to handle extracting the quantity from "I want ten bananas" for example.
The examples folder has code samples organized by platform. We offer these for Android, portable C++, iOS, MacOS, Python, and Windows. We have tried to use the most common build system for each platform, so Android uses Android Studio and Maven, iOS and MacOS use Xcode and Swift, while Windows uses Visual Studio.
The examples usually include one minimal project that just creates a transcriber and then feeds it data from a WAV file, and another that's pulling audio from a microphone using the platform's default framework for accessing audio devices.
We distribute the library through the most widely-used package managers for each platform. Here's how you can use these to add the framework to an existing project on different systems.
The Python package is hosted on PyPI, so all you should need to do to install it is pip install moonshine-voice, and then import moonshine_voice in your project.
For iOS we use the Swift Package Manager, with an auto-updated GitHub repository holding each version. To use this, right-click on the file view sidebar in Xcode and choose "Add Package Dependencies..." from the menu. A dialog should open up; paste https://github.com/moonshine-ai/moonshine-swift/ into the top search box and you should see moonshine-swift. Select it and choose "Add Package", and it should be added to your project. You should now be able to import MoonshineVoice and use the library. You will need to add any model files you use to your app bundle and ensure they're copied during the deployment phase, so they can be accessed on-device.
For reference purposes you can find Xcode projects with these changes applied in examples/ios/Transcriber and examples/macos/BasicTranscription.
On Android we publish the package to Maven. To include it in your project using Android Studio and Gradle, first add the version number you want to the gradle/libs.versions.toml file by inserting a line in the [versions] section, for example moonshineVoice = "0.0.48". Then in the [libraries] part, add a reference to the package: moonshine-voice = { group = "ai.moonshine", name = "moonshine-voice", version.ref = "moonshineVoice" }.
Finally, in your app/build.gradle.kts add the library to the dependencies list: implementation(libs.moonshine.voice). You can find a working example of all these changes in examples/android/Transcriber.
We couldn't find a single package manager that is used by most Windows developers, so instead we've made the raw library and headers available as a download. The script in examples/windows/cli-transcriber/download-lib.bat will fetch these for you. You'll see an include folder that you should add to the include search paths in your project settings, and a lib directory that you should add to the library search paths. Then add all of the library files in the lib folder to your project's linker dependencies.
The recommended interface to use on Windows is the C++ language binding. This is a header-only library that offers a higher-level API than the underlying C version. You can #include "moonshine-cpp.h" to access Moonshine from your C++ code. If you want to see an example of all these changes together, take a look at examples/windows/cli-transcriber.
The library is designed to help you understand what's going wrong when you hit an issue. If something isn't working as expected, the first place to look is the console for log messages. Whenever there's a failure point or an exception within the core library, you should see a message that adds more information about what went wrong. Your language bindings should also recognize when the core library has returned an error and raise an appropriate exception, but sometimes the logs can be helpful because they contain more details.
If no errors are being reported but the quality of the transcription isn't what you expect, it's worth ruling out an issue with the audio data that the transcriber is receiving. To make this easier, you can pass in the save_input_wav_path option when you create a transcriber. That will save any audio received into .wav files in the folder you specify. Here's a Python example:
python -m moonshine_voice.transcriber --options='save_input_wav_path=.'
This will run test audio through a transcriber, and write out the audio it has received into an input_1.wav file in the current directory. If you're running multiple streams, you'll see input_2.wav, etc. for each additional one. These wavs only contain the audio data from the latest session, and are overwritten after each one is started. Listening to these files should help you confirm that the input you're providing is as you expect it, and not distorted or corrupted.
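If you're embedding the library rather than running the command-line module, the same option can be passed through the constructor's options dictionary (values are strings, as described in the API reference below); a sketch:
transcriber = Transcriber(
    model_path=model_path,
    model_arch=model_arch,
    # Dump exactly what the transcriber receives as 16KHz mono WAV files
    # into the current directory.
    options={"save_input_wav_path": "."},
)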
If you're running into errors it can be hard to keep track of the timeline of your interactions with the library. The log_api_calls option will print out the underlying API calls that have been triggered to the console, so you can investigate any ordering or timing issues.
uv run -m moonshine_voice.transcriber --options='log_api_calls=true'
If you want to debug into the library internals, or add instrumentation to help understand its operation, or add improvements or customizations, all of the source is available for you to build it for yourself.
The core engine of the library is contained in the core folder of this repo. It's written in C++ with a C interface for easy integration with other languages. We use cmake to build on all our platforms, and so the easiest way to get started is something like this:
cd core
mkdir -p build
cd build
cmake ..
cmake --build .
After that completes you should have a set of binary executables you can run on your own system. These executables are all unit tests, and expect to be run from the test-assets folder. You can run the build and test process in one step using the scripts/run-core-tests.sh script, or scripts/run-core-tests.bat for Windows. All tests should compile and run without any errors.
There are various scripts for building for different platforms and languages, but to see examples of how to build for all of the supported systems you should look at scripts/build-all-platforms.sh. This is the script we call for every release, and it builds all of the artifacts we upload to the various package manager systems.
The different platforms and languages have a layer on top of the C interfaces to enable idiomatic use of the library within the different environments. The major systems have their own top-level folders in this repo, for example: python, android, and swift for iOS and MacOS. This is where you'll find the code that calls the underlying core library routines, and handles the event system for each platform.
If you have a device that isn't supported, you can try building using cmake on your system. The only major dependency that the C++ core library has is the ONNX Runtime. We include pre-built binary library files for all our supported systems, but you'll need to find or build your own version if the libraries we offer don't cover your use case.
If you want to call this library from a language we don't support, then you should take a look at the C interface bindings. Most languages have some way to call into C functions, so you can use these and the binding examples for other languages to guide your implementation.
The easiest way to get the model files is using the Python module. After installing it run the downloader like this:
python -m moonshine_voice.download --language en
You can use either the two-letter code or the English name for the language argument. If you want to see which languages are supported by your current version they're listed below, or you can supply a bogus language as the argument to this command:
python -m moonshine_voice.download --language foo
You can also optionally request a specific model architecture using the model-arch flag, chosen from the numbers in moonshine-c-api.h. If no architecture is set, the script will load the highest-quality model available.
The download script will log the location of the downloaded model files and the model architecture, for example:
encoder_model.ort: 100%|███████████████████████████████████████████████████████| 29.9M/29.9M [00:00<00:00, 34.5MB/s]
decoder_model_merged.ort: 100%|██████████████████████████████████████████████████| 104M/104M [00:02<00:00, 52.6MB/s]
tokenizer.bin: 100%|█████████████████████████████████████████████████████████████| 244k/244k [00:00<00:00, 1.44MB/s]
Model download url: https://download.moonshine.ai/model/base-en/quantized/base-en
Model components: ['encoder_model.ort', 'decoder_model_merged.ort', 'tokenizer.bin']
Model arch: 1
Downloaded model path: /Users/petewarden/Library/Caches/moonshine_voice/download.moonshine.ai/model/base-en/quantized/base-en
The last two lines tell you which model architecture is being used, and where the model files are on disk. By default it uses your user cache directory, which is ~/Library/Caches/moonshine_voice on MacOS, but you can use a different location by setting the MOONSHINE_VOICE_CACHE environment variable before running the script.
The core library includes a benchmarking tool that simulates processing live audio by loading a .wav audio file and feeding it in chunks to the model. To run it:
cd core
mkdir build
cd build
cmake ..
cmake --build . --config Release
./benchmark
This will report the absolute time taken to process the audio, what percentage of the audio file's duration that is, and the average latency for a response.
The percentage is helpful because it approximates how much of a compute load the model will be on your hardware. For example, if it shows 20% then that means the speech processing will take a fifth of the compute time when running in your application, leaving 80% for the rest of your code.
The latency metric needs a bit of explanation. What most applications care about is how soon they are notified about a phrase after the user has finished talking, since this determines how fast the product can respond. As with any user interface, the time between speech ending and the app doing something determines how responsive the voice interface feels, with a goal of keeping it below 200ms. The latency figure logged here is the average time between when the library determines the user has stopped talking and the delivery of the final transcript of that phrase to the client. This is where streaming models have the most impact, since they do a lot of their work upfront, while speech is still happening, so they can usually finish very quickly.
By default the benchmark binary uses the Tiny English model that's embedded in the framework, but you can pass in the --model-path and --model-arch parameters to choose one that you've downloaded.
You can also choose how often the transcript should be updated using the --transcription-interval argument. This defaults to 0.5 seconds, but the right value will depend on how fast your application needs updates. Longer intervals reduce the compute required a bit, at the cost of slower updates.
For platforms that support Python, you can run the scripts/run-benchmarks.py script which will evaluate similar metrics, with the advantage that it can also download the models so you don't need to worry about path handling.
It also evaluates equivalent Whisper models. This is a pretty opinionated benchmark that looks at the latency and total compute cost of the two families of models in a situation that is representative of many common real-time voice applications' requirements:
These are very different requirements from bulk offline processing scenarios, where the latency on a single segment of speech matters less than the overall throughput of the system. This allows optimizations like batch processing.
We are not claiming that Whisper is not a great model for offline processing, but we do want to highlight the advantages that Moonshine offers for live speech applications with real-time latency requirements.
The experimental setup is as follows:
Moonshine Voice is based on a family of speech to text models created by the team at Moonshine AI. If you want to download models to use with the framework, you can use the Python package to access them. This section contains more information about the history and characteristics of the models we offer.
These research papers are a good resource for understanding the architectures and performance strategies behind the models:
Here are the models currently available. See Downloading Models for how to obtain them.
| Language | Architecture | # Parameters | WER/CER |
|---|---|---|---|
| English | Tiny | 26 million | 12.66% |
| English | Tiny Streaming | 34 million | 12.00% |
| English | Base | 58 million | 10.07% |
| English | Small Streaming | 123 million | 7.84% |
| English | Medium Streaming | 245 million | 6.65% |
| Arabic | Base | 58 million | 5.63% |
| Japanese | Base | 58 million | 13.62% |
| Korean | Tiny | 26 million | 6.46% |
| Mandarin | Base | 58 million | 25.76% |
| Spanish | Base | 58 million | 4.33% |
| Ukrainian | Base | 58 million | 14.55% |
| Vietnamese | Base | 58 million | 8.82% |
The English evaluations were done using the HuggingFace OpenASR Leaderboard datasets and methodology. The other languages were evaluated using the FLEURS dataset and the scripts/eval-model-accuracy script, with the character or word error rate chosen per language.
One common issue to watch out for if you're using models that don't use the Latin alphabet (so any languages except English and Spanish) is that you'll need to set the max_tokens_per_second option to 13.0 when you create the transcriber. This is because the most common pattern for hallucinations is endlessly repeating the last few words, and our heuristic to detect this is to check if there's an unusually high number of tokens for the duration of a segment. Unfortunately the base number of tokens per second for non-Latin languages is much higher than for English, thanks to how we're tokenizing, so you have to manually set the threshold higher to avoid cutting off valid outputs.
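Concretely, that looks something like this when you create the transcriber (a sketch, with placeholder variables for the path and architecture reported by the download script; the value is passed as a string, per the options documentation below):
transcriber = Transcriber(
    model_path=korean_model_path,    # placeholder: path from the download script
    model_arch=korean_model_arch,    # placeholder: architecture from the download script
    # Raise the repetition-detection threshold so valid output isn't cut off.
    options={"max_tokens_per_second": "13.0"},
)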
It's often useful to be able to calibrate a speech to text model towards certain words that you're expecting to hear in your application, whether it's technical terms, slang, or a particular dialect or accent. Moonshine AI offers full retraining using our internal dataset for customization as a commercial service and we do hope to support free lighter-weight approaches in the future. You can find a community project working on this at github.com/pierre-cheneau/finetune-moonshine-asr.
This documentation covers the Python API, but the same functions and classes are present in all the other supported languages, just with native adaptations (for example CamelCase). You should be able to use this as a reference for all platforms the library runs on.
Represents a single "line" or speech segment in a transcript. It includes information about the timing, speaker, and text content of the utterance, as well as state such as whether the speech is ongoing or done. If you're building an application that involves transcription, this data structure has all of the information available about each line of speech. Be aware that each line can be updated multiple times with new text and other information as the user keeps speaking.
text: A string containing the UTF-8 encoded text that has been extracted from the audio of this segment.
start_time: A float value representing the time in seconds since the start of the current session that the current utterance was first detected.
duration: A float that represents the duration in seconds of the current utterance.
line_id: An unsigned 64-bit integer that represents a line in a collision-resistant way, for use in storage and ensuring the application can keep track of lines as they change over time. See Transcription Event Flow for more details.
is_complete: A boolean that is false until the segment has been completed, and true for the remainder of the line's lifetime.
is_updated: A boolean that's true if any information about the line has changed since the last time the transcript was updated. Since the transcript will be periodically updated internally by the library as you add audio chunks, you can't rely on polling this to detect changes. You should rely on the event/listener flow to catch modifications instead. This applies to all of the booleans below too.
is_new: A boolean indicating whether the line has been added to the transcript by the last update call.
has_text_changed: A boolean that's set if the contents of the line's text was modified by the last transcript update. If this is set, is_updated will always be set too, but if other properties of the line (for example the duration or the audio data) have changed but the text remains the same, then is_updated can be true while has_text_changed is false.
has_speaker_id: Whether a speaker has been identified for this line. Unless the identify_speakers option passed to the Transcriber is set to false, this will always be true by the time the line is complete, and potentially it may be set earlier. The speaker identification process is still experimental, so the current accuracy may not be reliable enough for some applications.
speaker_id: A unique-ish unsigned 64-bit integer that is designed for storage and for identifying the same speaker across multiple sessions.
speaker_index: An integer that represents the order in which the speaker appeared in the transcript, to make it easy to give speakers default names like "Speaker 1:", etc.
audio_data: An array of 32-bit floats representing the raw audio data that the line is based on, as 16KHz mono PCM data between -1.0 and 1.0. This can be useful for further processing (for example to drive a visual indicator or to feed into a specialized speech to text model after the line is complete).
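As an illustration of how these fields tend to be consumed, here's a sketch of a completion callback on a listener subclass that prints a labelled transcript line (field names as documented above; whether speaker_index starts at zero or one may depend on the binding, and speaker identification is still experimental):
def on_line_completed(self, event):
    line = event.line
    label = f"Speaker {line.speaker_index}" if line.has_speaker_id else "Unknown speaker"
    end_time = line.start_time + line.duration
    print(f"[{line.start_time:6.2f}s - {end_time:6.2f}s] {label}: {line.text}")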
A Transcript contains a list of TranscriptLines, arranged in descending time order. The transcript is reset at every Transcriber.start() call, so if you need to retain information from it, you should make explicit copies. Most applications won't need to work with this structure, since all of the same information is available through event callbacks.
Contains information about a change to the transcript. It has four subclasses, which are explained in more detail in the transcription event flow section. Most of the information is contained in the line member, but there's also a stream_handle that your application can use to tell the source of a line if you're running multiple streams.
This event is sent to any listeners you have registered when an IntentRecognizer finds a match to a command you've specified.
trigger_phrase: The string representing the canonical command, exactly as you registered it with the recognizer.
utterance: The text of the utterance that triggered the match.
similarity: A float value that reflects how confident the recognizer is that the utterance has the same meaning as the command, with zero being the least confident and one the most.
Handles the speech to text pipeline.
__init__(): Loads and initializes the transcriber.
model_path: The path to the directory holding the component model files needed for the complete flow. Note that this is a path to the folder, not an individual file. You can download and get a path to a cached version of the standard models using the download_model() function.
model_arch: The architecture of the model to load, from the selection defined in ModelArch.
update_interval: By default the transcriber will periodically run text transcription as new audio data is fed in, so that update events can be triggered. This value is how often the speech to text model should be run. You can set this to a large duration to suppress updates between a line starting and ending, but because the streaming models do a lot of their work before the final speech to text stage, this may not reduce overall latency by much.
options: These are flags that affect how the transcription process works inside the library, often enabling performance optimizations or debug logging. They are passed as a dictionary mapping strings to strings, even if the values are to be interpreted as numbers - for example {"max_tokens_per_second": "15"}.
skip_transcription: If you only want the voice-activity detection and segmentation, but want to do further processing in your app, you can set this to "true" and then use the audio_data array in each line.
max_tokens_per_second: The models occasionally get caught in an infinite decoder loop, where the same words are repeated over and over again. As a heuristic to catch this we compare the number of tokens in the current run to the duration of the audio, and if there seem to be too many tokens we truncate the decoding. By default this is set to 6.5, but for non-English languages where the models produce a lot more raw tokens per second, you may want to bump this to 13.0.
transcription_interval: How often to run transcription, in seconds.
vad_threshold: Controls the sensitivity of the initial voice-activity detection stage that decides how to break raw audio into segments. This defaults to 0.5, with lower values creating longer segments, potentially with more background noise sections, and higher values breaking up speech into smaller chunks, at the risk of losing some actual speech by clipping.
save_input_wav_path: One of the most common causes of poor transcription quality is incorrect conversion or corruption of the audio that's fed into the pipeline. If you set this option to a folder path, the transcriber will save out exactly what it has received as 16KHz mono WAV files, so you can ensure that your input audio is as you expect.
log_api_calls: Another debugging option; turning this on causes all calls to the C API entry points in the library to write out information on their arguments to stderr or the console each time they're run.
log_ort_runs: Prints information about the ONNXRuntime inference runs and how long they take.
vad_window_duration: The VAD runs every 30ms, but to get higher-confidence values we average the results over time. This value is the time in seconds to average over. The default is 0.5s; shorter durations will spot speech faster at the cost of lower accuracy, while higher values may increase accuracy at the cost of missing shorter utterances.
vad_look_behind_sample_count: Because we're averaging over time, the mean VAD signal will lag behind the initial speech detection. To compensate for that, when speech is detected we pull in some of the audio immediately before the average passed the threshold. This value is the number of samples to prepend, and defaults to 8192 (all at 16KHz).
vad_max_segment_duration: It can be hard to find gaps in rapid-fire speech, but a lot of applications want their text in chunks that aren't endless. This option sets the longest duration a line can be before it's marked as complete and a new segment is started. The default is 15 seconds, and to increase the chance that a natural break is found, the vad_threshold is linearly decreased over time from two thirds of the maximum duration until the maximum is reached.
identify_speakers: A boolean that controls whether to run the speaker identification stage in the pipeline.
transcribe_without_streaming(): A convenience function to extract text from a non-live audio source, such as a file. We optimize for streaming use cases, so you're probably better off using libraries that specialize in bulk, batched transcription if you use this a lot and have performance constraints. This will still call any registered event listeners as it processes the lines, so this can be useful to test your application using pre-recorded files, or to easily integrate offline audio sources.
audio_data: An array of 32-bit float values, representing mono PCM audio between -1.0 and 1.0, to be analyzed for speech.
sample_rate: The number of samples per second. The library uses this to convert to its working rate (16KHz) internally.
flags: Integer, currently unused.
start(): Begins a new transcription session. You need to call this after you've created the Transcriber and before you add any audio.
stop(): Ends a transcription session. If a speech segment was still active, it's marked as complete and the appropriate event handlers are called.
add_audio(): Call this every time you have a new chunk of audio from your input, to begin processing. The size and sample rate of the audio should be whatever's natural for your source, since the library will handle all conversions.
audio_data: Array of 32-bit floats representing a mono PCM chunk of audio.
sample_rate: How many samples per second are present in the input audio. The library uses this to convert the data to its preferred rate.
update_transcription(): The transcript is usually updated periodically as audio data is added, but if you need to trigger one yourself, for example when a user presses refresh, or want access to the complete transcript, you can call this manually.
flags: Integer holding flags that are combined using bitwise or (|).
MOONSHINE_FLAG_FORCE_UPDATE: By default the transcriber returns a cached version of the transcript if less than 200ms of new audio has come in since the last transcription, but by setting this you can ensure that a transcription happens regardless.
create_stream(): If your application is taking audio input from multiple sources, for example a microphone and system audio, then you'll want to create multiple streams on a single transcriber to avoid loading multiple copies of the models. Each stream has its own transcript, and line events are tagged with the stream handle they came from. You don't need to worry about this if you only need to deal with a single input though; just use the Transcriber class's start(), stop(), etc. This function returns a Stream object.
flags: Integer, reserved for future expansion.
update_interval: Period in seconds between transcription updates.
add_listener(): Registers a callable object with the transcriber. This object will be called back as audio is fed in and text is extracted.
listener: This is often a subclass of TranscriptEventListener, but can be a plain function. It defines what code is called when a speech event happens.
remove_listener(): Deletes a listener so that it no longer receives events.
listener: An object you previously passed into add_listener().
remove_all_listeners(): Deletes all registered listeners so that none of them receive events anymore.
This class supports the start(), stop(), and listener functions of Transcriber, but internally creates and attaches to the system's microphone input, so you don't need to call add_audio() yourself. In Python this uses the sounddevice library, but in other languages the class uses the native audio API under the hood.
The access point for when you need to feed multiple audio inputs into a single transcriber. Supports start(), stop(), add_audio(), update_transcription(), add_listener(), remove_listener(), and remove_all_listeners() as documented in the Transcriber class.
A convenience class to derive from to create your own listener code. Override any or all of on_line_started(), on_line_updated(), on_line_text_changed(), and on_line_completed(), and they'll be called back when the corresponding event occurs.
A specialized kind of event listener that you add as a listener to a Transcriber, and it then analyzes the transcription results to determine if any of the specified commands have been spoken, using natural-language fuzzy matching.
__init__(): Constructs a new recognizer, loading required models.
model_path: String holding a path to a folder that contains the required embedding model files. You can download and obtain a path by calling download_embedding_model().
model_arch: An EmbeddingModelArch, obtained from the download_embedding_model() function.
model_variant: The precision to run the model at. "q4" is recommended.
threshold: How close an utterance has to be to the target sentence to trigger an event.
register_intent(): Asks the recognizer to look for utterances that match a given command, and call back into the application when one is found.
trigger_phrase: The canonical command sentence to match against.
handler: A callable function or object that contains code you want to trigger when the command is recognized.
unregister_intent(): Removes an intent handler from the event callback process.
handler: A handler that had previously been registered with the recognizer.
clear_intents(): Removes all intent listeners from the recognizer.
set_on_intent(): Sets a callable that is called when any registered action is triggered, not just a single command as for register_intent().
Our primary support channel is the Moonshine Discord. We make our best efforts to respond to questions there, and on other channels like GitHub issues. We also offer paid support for commercial customers who need porting or acceleration on other platforms, model customization, more languages, or any other services; please get in touch.
This library is in active development, and we aim to implement:
We're grateful to:
This code, apart from the source in core/third-party, is licensed under the MIT License, see LICENSE in this repository.
The English-language models are also released under the MIT License. Models for other languages are released under the Moonshine Community License, which is a non-commercial license.
The code in core/third-party is licensed according to the terms of the open source projects it originates from, with details in a LICENSE file in each subfolder.