Merge pull request #144 from makaveli10/update_readme
add whisper live demo video.
Showing 1 changed file with 18 additions and 27 deletions.
@@ -1,9 +1,15 @@
# whisper-live
A nearly-live implementation of OpenAI's Whisper.
# WhisperLive

This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from a microphone and pre-recorded audio files.
<h2 align="center">
  <a href="https://www.youtube.com/watch?v=0PHWCApIcCI"><img src="https://img.youtube.com/vi/0PHWCApIcCI/0.jpg" style="background-color:rgba(0,0,0,0);" height=300 alt="WhisperLive"></a>
  <br><br>A nearly-live implementation of OpenAI's Whisper.
  <br><br>
</h2>

Unlike traditional speech recognition systems that rely on continuous audio streaming, we use [voice activity detection (VAD)](https://github.com/snakers4/silero-vad) to detect the presence of speech and only send the audio data to Whisper when speech is detected. This reduces the amount of data sent to the Whisper model and improves the accuracy of the transcription output.
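To give a sense of the gating, here is a minimal sketch using Silero VAD via torch.hub; it illustrates the technique, not WhisperLive's actual pipeline, and the audio path is only an example:

```python
import torch

# Load Silero VAD and its helper utilities from torch.hub.
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

# Read a 16 kHz mono waveform (path is just an example).
wav = read_audio("tests/jfk.wav", sampling_rate=16000)

# Detect the spans that contain speech; only these chunks would be
# forwarded to Whisper, so silence never reaches the model.
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(speech_timestamps)  # [{'start': sample, 'end': sample}, ...]
```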
This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from a microphone and pre-recorded audio files.

## Installation
- Install PyAudio and ffmpeg
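On Debian/Ubuntu, for example, this typically amounts to the following; package names on other platforms differ:

```bash
# ffmpeg for decoding media, PortAudio headers for building PyAudio
sudo apt install ffmpeg portaudio19-dev
pip install pyaudio
```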
@@ -50,7 +56,7 @@ python3 run_server.py -p 9090 \

### Running the Client
- To transcribe an audio file:
- Initializing the client:
```python
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
@@ -60,41 +66,27 @@ client = TranscriptionClient(
    translate=False,
    model="small"
)
```
It connects to the server running on localhost at port 9090. With a multilingual model, the language of the audio is detected automatically. You can also use the `lang` option to specify a target language for the transcription, in this case English ("en"). Set `translate=True` to translate from the source language to English, or `False` to transcribe in the source language.

- Transcribe an audio file:
```python
client("tests/jfk.wav")
```
This command transcribes the specified audio file (tests/jfk.wav) using the Whisper model. It connects to the server running on localhost at port 9090. With a multilingual model, the language of the audio is detected automatically. You can also use the `lang` option to specify a target language, in this case English ("en"). Set `translate=True` to translate from the source language to English, or `False` to transcribe in the source language.

- To transcribe from a microphone:
```python
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
    "localhost",
    9090,
    lang="hi",
    translate=True,
    model="small"
)
client()
```
This command captures audio from the microphone and sends it to the server for transcription. It uses the multilingual model with `hi` as the selected language. Whisper `small` is used by default, but it can be swapped for any other size depending on the requirements and the hardware running the server.
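For example, serving a larger model only means changing the `model` argument; this is a minimal sketch, and `"medium"` is only an illustrative choice:

```python
from whisper_live.client import TranscriptionClient

# Identical setup, but requesting a bigger Whisper model from the server
# ("medium" is only an example; use what your hardware can serve).
client = TranscriptionClient(
    "localhost",
    9090,
    lang="hi",
    translate=True,
    model="medium",
)
client()  # no argument: capture from the default microphone
```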
- To transcribe from an HLS stream:
```python
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(host, port, lang="en", translate=False)
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")
```
This command streams audio into the server from an HLS stream. It uses the same options as the previous commands: the multilingual model with a specified target language and task.

## Transcribe audio from browser
- Run the server with your desired backend as shown [here](https://github.com/collabora/WhisperLive?tab=readme-ov-file#running-the-server)

### Chrome Extension
- Refer to [Audio-Transcription-Chrome](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Chrome#readme) to use the Chrome extension.

### Firefox Extension
- Refer to [Audio-Transcription-Firefox](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Firefox#readme) to use the Mozilla Firefox extension.

## Browser Extensions
- Run the server with your desired backend as shown [here](https://github.com/collabora/WhisperLive?tab=readme-ov-file#running-the-server).
- Transcribe audio directly from your browser using our Chrome or Firefox extensions. Refer to [Audio-Transcription-Chrome](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Chrome#readme) and [Audio-Transcription-Firefox](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Firefox#readme) for setup instructions.

## Whisper Live Server in Docker
- GPU
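The Docker instructions themselves are elided from this diff. Purely as an illustration, running a prebuilt GPU image would look something like this; the image name and tag here are assumptions, not taken from the README:

```bash
# Hand the container all GPUs and map the server's port
docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest
```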
@@ -140,6 +132,5 @@ We are available to help you with both Open Source and proprietary AI projects.
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  commit = {insert_some_commit_here},
  email = {[email protected]}
}