diff --git a/README.md b/README.md
index c4712df1..f62292e4 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,9 @@
-# whisper-live
-A nearly-live implementation of OpenAI's Whisper.
-This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from microphone and pre-recorded audio files.
-Unlike traditional speech recognition systems that rely on continuous audio streaming, we use [voice activity detection (VAD)](https://github.com/snakers4/silero-vad) to detect the presence of speech and only send the audio data to whisper when speech is detected. This helps to reduce the amount of data sent to the whisper model and improves the accuracy of the transcription output.
+# WhisperLive
+
+A nearly-live implementation of OpenAI's Whisper.
+
+This project is a real-time transcription application that uses the OpenAI Whisper model
+to convert speech input into text output. It can be used to transcribe both live audio
+input from microphone and pre-recorded audio files.
## Installation
- Install PyAudio and ffmpeg
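One way to install these prerequisites, as a sketch for Debian/Ubuntu (package names are an assumption; they differ on other platforms):

```bash
# PortAudio headers are needed to build PyAudio; ffmpeg handles audio decoding.
sudo apt-get install -y ffmpeg portaudio19-dev
pip install pyaudio
```

On macOS, Homebrew's `ffmpeg` and `portaudio` packages serve the same role.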
@@ -50,7 +56,7 @@ python3 run_server.py -p 9090 \
### Running the Client
-- To transcribe an audio file:
+- Initialize the client:
```python
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
@@ -60,41 +66,27 @@ client = TranscriptionClient(
translate=False,
model="small"
)
+```
+The client connects to the server running on localhost at port 9090. With a multilingual model, the transcription language is detected automatically; you can also use the `lang` option to specify a target language, in this case English ("en"). Set `translate=True` to translate from the source language into English, or `False` to transcribe in the source language.
+- Transcribe an audio file:
+```python
client("tests/jfk.wav")
```
-This command transcribes the specified audio file (audio.wav) using the Whisper model. It connects to the server running on localhost at port 9090. Using a multilingual model, language for the transcription will be automatically detected. You can also use the language option to specify the target language for the transcription, in this case, English ("en"). The translate option should be set to `True` if we want to translate from the source language to English and `False` if we want to transcribe in the source language.
-- To transcribe from microphone:
+- To transcribe from the microphone:
```python
-from whisper_live.client import TranscriptionClient
-client = TranscriptionClient(
- "localhost",
- 9090,
- lang="hi",
- translate=True,
- model="small"
-)
client()
```
-This command captures audio from the microphone and sends it to the server for transcription. It uses the multilingual model with `hi` as the selected language. We use whisper `small` by default but can be changed to any other option based on the requirements and the hardware running the server.
-- To transcribe from a HLS stream:
+- To transcribe from an HLS stream:
```python
-from whisper_live.client import TranscriptionClient
-client = TranscriptionClient(host, port, lang="en", translate=False)
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")
```
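As a plain-Python illustration of how the `lang` and `translate` options described above relate to Whisper's underlying task, here is a hypothetical helper (not part of the `whisper_live` API):

```python
# Hypothetical helper (not part of whisper_live) showing how the client
# options correspond to Whisper's task and language settings.
def whisper_task(lang=None, translate=False):
    """Map client options to (task, language); lang=None means auto-detect."""
    task = "translate" if translate else "transcribe"
    return task, lang

# translate=True turns any source language into English output;
# translate=False keeps the transcript in the source language.
print(whisper_task(lang="hi", translate=True))   # ('translate', 'hi')
print(whisper_task(lang="en", translate=False))  # ('transcribe', 'en')
```

The actual mapping happens server-side; this sketch only mirrors the option semantics documented above.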
-This command streams audio into the server from a HLS stream. It uses the same options as the previous command, using the multilingual model and specifying the target language and task.
-
-## Transcribe audio from browser
-- Run the server with your desired backend as shown [here](https://github.com/collabora/WhisperLive?tab=readme-ov-file#running-the-server)
-
-### Chrome Extension
-- Refer to [Audio-Transcription-Chrome](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Chrome#readme) to use Chrome extension.
-### Firefox Extension
-- Refer to [Audio-Transcription-Firefox](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Firefox#readme) to use Mozilla Firefox extension.
+## Browser Extensions
+- Run the server with your desired backend as shown [here](https://github.com/collabora/WhisperLive?tab=readme-ov-file#running-the-server).
+- Transcribe audio directly from your browser using our Chrome or Firefox extensions. Refer to [Audio-Transcription-Chrome](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Chrome#readme) and [Audio-Transcription-Firefox](https://github.com/collabora/whisper-live/tree/main/Audio-Transcription-Firefox#readme) for setup instructions.
## Whisper Live Server in Docker
- GPU
@@ -140,6 +132,5 @@ We are available to help you with both Open Source and proprietary AI projects.
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snakers4/silero-vad}},
- commit = {insert_some_commit_here},
email = {hello@silero.ai}
}