Speech recognition errors are inevitable, and they occur for many reasons. For example, if a word is not in a model's vocabulary, it will always be misrecognized no matter how carefully it is spoken. You can use the custom words feature to add such words to a model, and custom pronunciations for even finer-grained control over how those words are recognized. A sketch of such a request follows below.
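As a minimal Python sketch of what a custom-words request over the Engine's TCP interface might look like: a JSON options line is sent first, followed by the audio. The "custom-words" option and its fields, the "recognize" command name, the pronunciation notation, and the localhost:9900 address are illustrative assumptions, not the documented request schema; see the TCP reference for the exact keys.

    # Hypothetical sketch: adding custom words over the Engine's TCP interface.
    # Option and command names below are assumptions for illustration only.
    import json
    import socket

    options = {
        "command": "recognize",                          # assumed command name
        "custom-words": [
            {"word": "Stennis"},                         # add an out-of-vocabulary word
            {"word": "Geoff", "pronunciation": "jh eh f"},  # with an explicit pronunciation
        ],
    }

    with socket.create_connection(("localhost", 9900)) as sock:
        sock.sendall((json.dumps(options) + "\n").encode())  # options as one JSON line
        with open("meeting.wav", "rb") as audio:             # placeholder audio file
            while chunk := audio.read(4096):
                sock.sendall(chunk)
        sock.shutdown(socket.SHUT_WR)          # signal that the audio is complete
        for reply in sock.makefile():          # replies arrive as newline-delimited JSON
            print(json.loads(reply))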
Even when all the words are known, a particular word or phrase may still be misrecognized. For example, if a regular meeting has a participant named "Geoff Stennis" and the recognizer consistently outputs "Jeff's tennis", you could use phrase biases to increase the likelihood of "Geoff Stennis".
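A phrase-biasing request could be sketched the same way, with only the options line changing; the "phrases" key, its fields, and the bias scale below are assumptions for illustration rather than documented option names.

    # Hypothetical options line for phrase biasing, sent over the same TCP
    # connection pattern as above; key names and the weight scale are assumed.
    options = {
        "command": "recognize",
        "phrases": [
            {"phrase": "Geoff Stennis", "bias": 5.0},   # favor the intended name
            {"phrase": "Jeff's tennis", "bias": -5.0},  # penalize the recurring error
        ],
    }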
A custom grammar can be used to constrain recognition to highly structured input instead of a model's default conversational grammar. For example, if you know an audio file contains only a phone number, you could use a custom grammar that recognizes only phone numbers. The related add-grammar command allows the grammar to be recognized simultaneously alongside a large-vocabulary ASR model, as sketched below.
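Sketched under the same assumptions, a grammar-restricted request and an add-grammar request might look like the following; the "grammar" key and its value are placeholders, and only the add-grammar command name itself comes from this page.

    # Hypothetical sketches; the "grammar" key and its value are placeholders.
    recognize_grammar_only = {
        "command": "recognize",
        "grammar": "phone-number",   # constrain recognition to the structured grammar
    }

    recognize_grammar_and_asr = {
        "command": "add-grammar",    # recognize the grammar alongside the ASR model
        "grammar": "phone-number",
    }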
Normally the ASR Engine segments an audio file automatically. Endpoint rules allow extensive customization of how this segmentation occurs.
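As a rough sketch, endpointing options might be passed alongside a recognize request as shown below; the "endpoint" key and its fields are illustrative assumptions (loosely modeled on Kaldi-style endpoint rules), not documented Engine options.

    # Hypothetical endpointing options; key names are assumptions for illustration.
    options = {
        "command": "recognize",
        "endpoint": {
            "min-trailing-silence": 1.0,    # e.g. close a segment after 1s of silence
            "max-segment-duration": 20.0,   # e.g. force a split after 20s of audio
        },
    }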
©2019-2022 Mod9 Technologies (Version 1.9.5)