Module blindai.audio
Classes
class Audio
Static methods
def transcribe(file: Union[str, bytes], model: str = 'tiny.en', connection: Optional[ForwardRef('BlindAiConnection')] = None, tee: Optional[str] = 'sgx') -> str
BlindAI Whisper API which converts speech to text based on the model passed.
Args
file
- str, bytes Path of the audio file to transcribe. The serialized bytes of a WAV file may also be passed.
model
- str The Whisper model to use. Defaults to "tiny.en".
connection
- Optional[BlindAiConnection] The BlindAI connection object. Defaults to None.
tee
- Optional[str] The Trusted Execution Environment to use. Defaults to "sgx". Currently unused.
Returns
str: The transcription of the audio file.
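Since `file` also accepts serialized WAV bytes, a caller can build the audio entirely in memory. A minimal sketch using only the standard library to produce WAV bytes suitable for `transcribe` (the `transcribe` call itself is shown commented out, as it requires a reachable BlindAI deployment; the helper name `make_wav_bytes` is illustrative, not part of the API):

```python
import io
import math
import struct
import wave

def make_wav_bytes(seconds: float = 1.0, rate: int = 16000) -> bytes:
    """Serialize a mono, 16-bit, 440 Hz sine tone as in-memory WAV bytes."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for i in range(int(seconds * rate)):
            sample = int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / rate))
            frames += struct.pack("<h", sample)  # little-endian signed 16-bit
        w.writeframes(bytes(frames))
    return buf.getvalue()

audio_bytes = make_wav_bytes()

# With a BlindAI deployment available, the bytes could then be transcribed:
# from blindai.audio import Audio
# text = Audio.transcribe(file=audio_bytes, model="tiny.en")
```

Passing a file path as `file` is equivalent; the bytes form simply avoids touching disk.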