Browse Source

Initial Commit, all copied

Alexander Huwiler 8 months ago
commit
08b011314b

+ 90 - 0
.github/workflows/ci.yml

@@ -0,0 +1,90 @@
+name: CI
+
+on:
+  push:
+    branches:
+      - master
+    tags:
+      - v*
+  pull_request:
+    branches:
+      - master
+
+jobs:
+  check-code-format:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python 3.9
+        uses: actions/setup-python@v5
+        with:
+          python-version: 3.9
+
+      - name: Install module
+        run: |
+          pip install wheel
+          pip install -e .[dev]
+
+      - name: Check code format with Black
+        run: |
+          black --check .
+
+      - name: Check imports order with isort
+        run: |
+          isort --check-only .
+
+      - name: Check code style with Flake8
+        if: ${{ always() }}
+        run: |
+          flake8 .
+
+
+  run-tests:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python 3.9
+        uses: actions/setup-python@v5
+        with:
+          python-version: 3.9
+
+      - name: Install module
+        run: |
+          pip install wheel
+          pip install -e .[dev]
+
+      - name: Run pytest
+        run: |
+          pytest -v tests/
+
+
+  build-and-push-package:
+    runs-on: ubuntu-latest
+    needs: [check-code-format, run-tests]
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python 3.9
+        uses: actions/setup-python@v5
+        with:
+          python-version: 3.9
+
+      - name: Install dependencies
+        run: |
+          pip install wheel
+
+      - name: Build package
+        run: |
+          python3 setup.py sdist bdist_wheel
+
+      - name: Push package on PyPI
+        if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags')
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          user: __token__
+          password: ${{ secrets.PYPI_API_TOKEN }}

+ 15 - 0
.gitignore

@@ -0,0 +1,15 @@
+# Byte-compiled / Optimized / DLL Files
+*.pyc
+*.pyo
+*.pyd
+__pycache__/
+
+# Distribution / Packaging
+venv/
+
+# Unit Test
+.pytest_cache/
+
+# Ignore IDE, Editor Files
+.idea/
+.vscode/

+ 31 - 0
CONTRIBUTING.md

@@ -0,0 +1,31 @@
+# Contributing to faster-whisper
+
+Contributions are welcome! Here are some pointers to help you install the library for development and validate your changes before submitting a pull request.
+
+## Install the library for development
+
+We recommend installing the module in editable mode with the `dev` extra requirements:
+
+```bash
+git clone https://github.com/SYSTRAN/faster-whisper.git
+cd faster-whisper/
+pip install -e .[dev]
+```
+
+## Validate the changes before creating a pull request
+
+1. Make sure the existing tests are still passing (and consider adding new tests as well!):
+
+```bash
+pytest tests/
+```
+
+2. Reformat and validate the code with the following tools:
+
+```bash
+black .
+isort .
+flake8 .
+```
+
+These steps are also run automatically in the CI when you open the pull request.

+ 21 - 0
LICENSE

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 SYSTRAN
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

+ 4 - 0
MANIFEST.in

@@ -0,0 +1,4 @@
+include faster_whisper/assets/silero_encoder_v5.onnx
+include faster_whisper/assets/silero_decoder_v5.onnx
+include requirements.txt
+include requirements.conversion.txt

+ 296 - 0
README.md

@@ -0,0 +1,296 @@
+[![CI](https://github.com/SYSTRAN/faster-whisper/workflows/CI/badge.svg)](https://github.com/SYSTRAN/faster-whisper/actions?query=workflow%3ACI) [![PyPI version](https://badge.fury.io/py/faster-whisper.svg)](https://badge.fury.io/py/faster-whisper)
+
+# Faster Whisper transcription with CTranslate2
+
+**faster-whisper** is a reimplementation of OpenAI's Whisper model using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
+
+This implementation is up to 4 times faster than [openai/whisper](https://github.com/openai/whisper) for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
+
+## Benchmark
+
+### Whisper
+
+For reference, here are the time and memory usage required to transcribe [**13 minutes**](https://www.youtube.com/watch?v=0u7tTptBo9I) of audio using different implementations:
+
+* [openai/whisper](https://github.com/openai/whisper)@[v20240930](https://github.com/openai/whisper/tree/v20240930)
+* [whisper.cpp](https://github.com/ggerganov/whisper.cpp)@[v1.7.2](https://github.com/ggerganov/whisper.cpp/tree/v1.7.2)
+* [transformers](https://github.com/huggingface/transformers)@[v4.46.3](https://github.com/huggingface/transformers/tree/v4.46.3)
+* [faster-whisper](https://github.com/SYSTRAN/faster-whisper)@[v1.1.0](https://github.com/SYSTRAN/faster-whisper/tree/v1.1.0)
+
+### Large-v2 model on GPU
+
+| Implementation | Precision | Beam size | Time | VRAM Usage |
+| --- | --- | --- | --- | --- |
+| openai/whisper | fp16 | 5 | 2m23s | 4708MB |
+| whisper.cpp (Flash Attention) | fp16 | 5 | 1m05s | 4127MB |
+| transformers (SDPA)[^1] | fp16 | 5 | 1m52s | 4960MB |
+| faster-whisper | fp16 | 5 | 1m03s | 4525MB |
+| faster-whisper (`batch_size=8`) | fp16 | 5 | 17s | 6090MB |
+| faster-whisper | int8 | 5 | 59s | 2926MB |
+| faster-whisper (`batch_size=8`) | int8 | 5 | 16s | 4500MB |
+
+### distil-whisper-large-v3 model on GPU
+
+| Implementation | Precision | Beam size | Time | YT Commons WER |
+| --- | --- | --- | --- | --- |
+| transformers (SDPA) (`batch_size=16`) | fp16 | 5 | 46m12s | 14.801 |
+| faster-whisper (`batch_size=16`) | fp16 | 5 | 25m50s | 13.527 |
+
+*GPU benchmarks were executed with CUDA 12.4 on an NVIDIA RTX 3070 Ti 8GB.*
+[^1]: transformers runs out of memory (OOM) for any batch size > 1
+
+### Small model on CPU
+
+| Implementation | Precision | Beam size | Time | RAM Usage |
+| --- | --- | --- | --- | --- |
+| openai/whisper | fp32 | 5 | 6m58s | 2335MB |
+| whisper.cpp | fp32 | 5 | 2m05s | 1049MB |
+| whisper.cpp (OpenVINO) | fp32 | 5 | 1m45s | 1642MB |
+| faster-whisper | fp32 | 5 | 2m37s | 2257MB |
+| faster-whisper (`batch_size=8`) | fp32 | 5 | 1m06s | 4230MB |
+| faster-whisper | int8 | 5 | 1m42s | 1477MB |
+| faster-whisper (`batch_size=8`) | int8 | 5 | 51s | 3608MB |
+
+*Executed with 8 threads on an Intel Core i7-12700K.*
+
+
+## Requirements
+
+* Python 3.9 or greater
+
+Unlike openai-whisper, FFmpeg does **not** need to be installed on the system. The audio is decoded with the Python library [PyAV](https://github.com/PyAV-Org/PyAV) which bundles the FFmpeg libraries in its package.
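+
+A minimal sketch of decoding audio yourself with the bundled decoder (for example, to pre-process it before transcription); `decode_audio` is the same helper used by the benchmark scripts, and the returned waveform can be passed directly to `WhisperModel.transcribe`:
+
+```python
+from faster_whisper import decode_audio
+
+# Decode any input FFmpeg understands (a path, URL, or file-like object)
+# into a mono float32 waveform resampled for the model.
+audio = decode_audio("audio.mp3")
+```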
+
+### GPU
+
+GPU execution requires the following NVIDIA libraries to be installed:
+
+* [cuBLAS for CUDA 12](https://developer.nvidia.com/cublas)
+* [cuDNN 9 for CUDA 12](https://developer.nvidia.com/cudnn)
+
+**Note**: The latest versions of `ctranslate2` only support CUDA 12 and cuDNN 9. For CUDA 11 and cuDNN 8, the current workaround is to downgrade to version `3.24.0` of `ctranslate2`; for CUDA 12 and cuDNN 8, downgrade to version `4.4.0` (this can be done with `pip install --force-reinstall ctranslate2==4.4.0` or by pinning the version in `requirements.txt`).
+
+There are multiple ways to install the NVIDIA libraries mentioned above. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below. 
+
+<details>
+<summary>Other installation methods (click to expand)</summary>
+
+
+**Note:** For all these methods below, keep in mind the above note regarding CUDA versions. Depending on your setup, you may need to install the _CUDA 11_ versions of libraries that correspond to the CUDA 12 libraries listed in the instructions below.
+
+#### Use Docker
+
+The libraries (cuBLAS, cuDNN) are installed in the official NVIDIA CUDA Docker image `nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04`.
+
+#### Install with `pip` (Linux only)
+
+On Linux these libraries can be installed with `pip`. Note that `LD_LIBRARY_PATH` must be set before launching Python.
+
+```bash
+pip install nvidia-cublas-cu12 nvidia-cudnn-cu12==9.*
+
+export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
+```
+
+#### Download the libraries from Purfview's repository (Windows & Linux)
+
+Purfview's [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides the required NVIDIA libraries for Windows & Linux in a [single archive](https://github.com/Purfview/whisper-standalone-win/releases/tag/libs). Decompress the archive and place the libraries in a directory included in the `PATH`.
+
+</details>
+
+## Installation
+
+The module can be installed from [PyPI](https://pypi.org/project/faster-whisper/):
+
+```bash
+pip install faster-whisper
+```
+
+<details>
+<summary>Other installation methods (click to expand)</summary>
+
+### Install the master branch
+
+```bash
+pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/refs/heads/master.tar.gz"
+```
+
+### Install a specific commit
+
+```bash
+pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/a4f1cc8f11433e454c3934442b5e1a4ed5e865c3.tar.gz"
+```
+
+</details>
+
+## Usage
+
+### Faster-whisper
+
+```python
+from faster_whisper import WhisperModel
+
+model_size = "large-v3"
+
+# Run on GPU with FP16
+model = WhisperModel(model_size, device="cuda", compute_type="float16")
+
+# or run on GPU with INT8
+# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
+# or run on CPU with INT8
+# model = WhisperModel(model_size, device="cpu", compute_type="int8")
+
+segments, info = model.transcribe("audio.mp3", beam_size=5)
+
+print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
+
+for segment in segments:
+    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
+```
+
+**Warning:** `segments` is a *generator* so the transcription only starts when you iterate over it. The transcription can be run to completion by gathering the segments in a list or a `for` loop:
+
+```python
+segments, _ = model.transcribe("audio.mp3")
+segments = list(segments)  # The transcription will actually run here.
+```
+
+### Batched Transcription
+
+The following code snippet illustrates how to run batched transcription on an example audio file. `BatchedInferencePipeline.transcribe` is a drop-in replacement for `WhisperModel.transcribe`.
+
+```python
+from faster_whisper import WhisperModel, BatchedInferencePipeline
+
+model = WhisperModel("turbo", device="cuda", compute_type="float16")
+batched_model = BatchedInferencePipeline(model=model)
+segments, info = batched_model.transcribe("audio.mp3", batch_size=16)
+
+for segment in segments:
+    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
+```
+
+### Faster Distil-Whisper
+
+The Distil-Whisper checkpoints are compatible with the Faster-Whisper package. In particular, the latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
+checkpoint is intrinsically designed to work with the Faster-Whisper transcription algorithm. The following code snippet 
+demonstrates how to run inference with distil-large-v3 on a specified audio file:
+
+```python
+from faster_whisper import WhisperModel
+
+model_size = "distil-large-v3"
+
+model = WhisperModel(model_size, device="cuda", compute_type="float16")
+segments, info = model.transcribe("audio.mp3", beam_size=5, language="en", condition_on_previous_text=False)
+
+for segment in segments:
+    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
+```
+
+For more information about the distil-large-v3 model, refer to the original [model card](https://huggingface.co/distil-whisper/distil-large-v3).
+
+### Word-level timestamps
+
+```python
+segments, _ = model.transcribe("audio.mp3", word_timestamps=True)
+
+for segment in segments:
+    for word in segment.words:
+        print("[%.2fs -> %.2fs] %s" % (word.start, word.end, word.word))
+```
+
+### VAD filter
+
+The library integrates the [Silero VAD](https://github.com/snakers4/silero-vad) model to filter out parts of the audio without speech:
+
+```python
+segments, _ = model.transcribe("audio.mp3", vad_filter=True)
+```
+
+The default behavior is conservative and only removes silence longer than 2 seconds. See the available VAD parameters and default values in the [source code](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/vad.py). They can be customized with the dictionary argument `vad_parameters`:
+
+```python
+segments, _ = model.transcribe(
+    "audio.mp3",
+    vad_filter=True,
+    vad_parameters=dict(min_silence_duration_ms=500),
+)
+```
+The VAD filter is enabled by default for batched transcription.
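+
+If needed, the filter can also be turned off for the batched pipeline; a minimal sketch, assuming `BatchedInferencePipeline.transcribe` accepts the same `vad_filter` flag as `WhisperModel.transcribe`:
+
+```python
+# Disable the VAD filter when running batched transcription (assumed flag).
+segments, info = batched_model.transcribe("audio.mp3", batch_size=16, vad_filter=False)
+```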
+
+### Logging
+
+The library logging level can be configured like this:
+
+```python
+import logging
+
+logging.basicConfig()
+logging.getLogger("faster_whisper").setLevel(logging.DEBUG)
+```
+
+### Going further
+
+See more model and transcription options in the [`WhisperModel`](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/transcribe.py) class implementation.
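+
+As a non-exhaustive sketch, here are a few commonly used options (names follow the `WhisperModel.transcribe` signature):
+
+```python
+segments, info = model.transcribe(
+    "audio.mp3",
+    beam_size=5,                       # beam search width
+    language="en",                     # skip automatic language detection
+    initial_prompt="Glossary: CTranslate2, Whisper",  # bias the decoder vocabulary
+    condition_on_previous_text=False,  # reduce repetition across segments
+    word_timestamps=True,              # also return per-word timings
+)
+```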
+
+## Community integrations
+
+Here is a non-exhaustive list of open-source projects using faster-whisper. Feel free to add your project to the list!
+
+
+* [speaches](https://github.com/speaches-ai/speaches) is an OpenAI-compatible server using `faster-whisper`. It's easily deployable with Docker, works with the OpenAI SDKs/CLI, and supports streaming and live transcription.
+* [WhisperX](https://github.com/m-bain/whisperX) is an award-winning Python library that offers speaker diarization and accurate word-level timestamps using wav2vec2 alignment
+* [whisper-ctranslate2](https://github.com/Softcatala/whisper-ctranslate2) is a command line client based on faster-whisper and compatible with the original client from openai/whisper.
+* [whisper-diarize](https://github.com/MahmoudAshraf97/whisper-diarization) is a speaker diarization tool that is based on faster-whisper and NVIDIA NeMo.
+* [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides standalone CLI executables of faster-whisper for Windows, Linux & macOS.
+* [asr-sd-pipeline](https://github.com/hedrergudene/asr-sd-pipeline) provides a scalable, modular, end to end multi-speaker speech to text solution implemented using AzureML pipelines.
+* [Open-Lyrics](https://github.com/zh-plus/Open-Lyrics) is a Python library that transcribes voice files using faster-whisper, and translates/polishes the resulting text into `.lrc` files in the desired language using OpenAI-GPT.
+* [wscribe](https://github.com/geekodour/wscribe) is a flexible transcript generation tool supporting faster-whisper. It can export word-level transcripts, which can then be edited with [wscribe-editor](https://github.com/geekodour/wscribe-editor).
+* [aTrain](https://github.com/BANDAS-Center/aTrain) is a graphical user interface implementation of faster-whisper developed at the BANDAS-Center at the University of Graz for transcription and diarization in Windows ([Windows Store App](https://apps.microsoft.com/detail/atrain/9N15Q44SZNS2)) and Linux.
+* [Whisper-Streaming](https://github.com/ufal/whisper_streaming) implements real-time mode for offline Whisper-like speech-to-text models with faster-whisper as the most recommended back-end. It implements a streaming policy with self-adaptive latency based on the actual source complexity, and demonstrates the state of the art.
+* [WhisperLive](https://github.com/collabora/WhisperLive) is a nearly-live implementation of OpenAI's Whisper which uses faster-whisper as the backend to transcribe audio in real-time.
+* [Faster-Whisper-Transcriber](https://github.com/BBC-Esq/ctranslate2-faster-whisper-transcriber) is a simple but reliable voice transcriber that provides a user-friendly interface.
+* [Open-dubbing](https://github.com/softcatala/open-dubbing) is an AI dubbing system that uses machine learning models to automatically translate and synchronize audio dialogue into different languages.
+
+## Model conversion
+
+When loading a model from its size such as `WhisperModel("large-v3")`, the corresponding CTranslate2 model is automatically downloaded from the [Hugging Face Hub](https://huggingface.co/Systran).
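+
+A minimal sketch of controlling where and how the model is downloaded, assuming the `download_root` and `local_files_only` constructor arguments:
+
+```python
+model = WhisperModel(
+    "large-v3",
+    device="cuda",
+    compute_type="float16",
+    download_root="./models",   # cache the converted model in this directory (assumed argument)
+    local_files_only=False,     # set to True to only use an existing local copy (assumed argument)
+)
+```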
+
+We also provide a script to convert any Whisper models compatible with the Transformers library. They could be the original OpenAI models or user fine-tuned models.
+
+For example, the command below converts the [original "large-v3" Whisper model](https://huggingface.co/openai/whisper-large-v3) and saves the weights in FP16:
+
+```bash
+pip install "transformers[torch]>=4.23"
+
+ct2-transformers-converter --model openai/whisper-large-v3 --output_dir whisper-large-v3-ct2 \
+--copy_files tokenizer.json preprocessor_config.json --quantization float16
+```
+
+* The option `--model` accepts a model name on the Hub or a path to a model directory.
+* If the option `--copy_files tokenizer.json` is not used, the tokenizer configuration is automatically downloaded when the model is loaded later.
+
+Models can also be converted from the code. See the [conversion API](https://opennmt.net/CTranslate2/python/ctranslate2.converters.TransformersConverter.html).
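+
+For example, a minimal sketch of the same conversion done from Python with `ctranslate2.converters.TransformersConverter` (argument names assumed from the linked API documentation):
+
+```python
+import ctranslate2
+
+converter = ctranslate2.converters.TransformersConverter(
+    "openai/whisper-large-v3",
+    copy_files=["tokenizer.json", "preprocessor_config.json"],
+)
+converter.convert("whisper-large-v3-ct2", quantization="float16")
+```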
+
+### Load a converted model
+
+1. Directly load the model from a local directory:
+```python
+model = faster_whisper.WhisperModel("whisper-large-v3-ct2")
+```
+
+2. [Upload your model to the Hugging Face Hub](https://huggingface.co/docs/transformers/model_sharing#upload-with-the-web-interface) and load it from its name:
+```python
+model = faster_whisper.WhisperModel("username/whisper-large-v3-ct2")
+```
+
+## Comparing performance against other implementations
+
+If you are comparing the performance against other Whisper implementations, you should make sure to run the comparison with similar settings. In particular:
+
+* Verify that the same transcription options are used, especially the same beam size. For example, in openai/whisper, `model.transcribe` uses a default beam size of 1, but here we use a default beam size of 5 (see the example at the end of this section).
+* Transcription speed is closely affected by the number of words in the transcript, so ensure that other implementations have a similar WER (Word Error Rate) to this one.
+* When running on CPU, make sure to set the same number of threads. Many frameworks will read the environment variable `OMP_NUM_THREADS`, which can be set when running your script:
+
+```bash
+OMP_NUM_THREADS=4 python3 my_script.py
+```
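+
+For the first point, a minimal sketch of matching openai/whisper's default decoding when benchmarking faster-whisper:
+
+```python
+# Use greedy decoding (beam_size=1) to match openai/whisper's default setting.
+segments, info = model.transcribe("audio.mp3", beam_size=1)
+segments = list(segments)  # force the transcription to run to completion
+```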

BIN
benchmark/benchmark.m4a


+ 80 - 0
benchmark/evaluate_yt_commons.py

@@ -0,0 +1,80 @@
+import argparse
+import json
+import os
+
+from io import BytesIO
+
+from datasets import load_dataset
+from jiwer import wer
+from pytubefix import YouTube
+from pytubefix.exceptions import VideoUnavailable
+from tqdm import tqdm
+from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
+
+from faster_whisper import BatchedInferencePipeline, WhisperModel, decode_audio
+
+
+def url_to_audio(row):
+    buffer = BytesIO()
+    yt = YouTube(row["link"])
+    try:
+        video = (
+            yt.streams.filter(only_audio=True, mime_type="audio/mp4")
+            .order_by("bitrate")
+            .desc()
+            .last()
+        )
+        video.stream_to_buffer(buffer)
+        buffer.seek(0)
+        row["audio"] = decode_audio(buffer)
+    except VideoUnavailable:
+        print(f'Failed to download: {row["link"]}')
+        row["audio"] = []
+    return row
+
+
+parser = argparse.ArgumentParser(description="WER benchmark")
+parser.add_argument(
+    "--audio_numb",
+    type=int,
+    default=None,
+    help="Specify the number of validation audio files in the dataset."
+    " Set to None to retrieve all audio files.",
+)
+args = parser.parse_args()
+
+with open(os.path.join(os.path.dirname(__file__), "normalizer.json"), "r") as f:
+    normalizer = EnglishTextNormalizer(json.load(f))
+
+dataset = load_dataset("mobiuslabsgmbh/youtube-commons-asr-eval", streaming=True).map(
+    url_to_audio
+)
+model = WhisperModel("large-v3", device="cuda")
+pipeline = BatchedInferencePipeline(model, device="cuda")
+
+
+all_transcriptions = []
+all_references = []
+# iterate over the dataset and run inference
+for i, row in tqdm(enumerate(dataset["test"]), desc="Evaluating..."):
+    if not row["audio"]:
+        continue
+    result, info = pipeline.transcribe(
+        row["audio"][0],
+        batch_size=8,
+        word_timestamps=False,
+        without_timestamps=True,
+    )
+
+    all_transcriptions.append("".join(segment.text for segment in result))
+    all_references.append(row["text"][0])
+    if args.audio_numb and i == (args.audio_numb - 1):
+        break
+
+# normalize predictions and references
+all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions]
+all_references = [normalizer(reference) for reference in all_references]
+
+# compute the WER metric
+word_error_rate = 100 * wer(hypothesis=all_transcriptions, reference=all_references)
+print("WER: %.3f" % word_error_rate)

+ 94 - 0
benchmark/memory_benchmark.py

@@ -0,0 +1,94 @@
+import argparse
+import time
+
+from typing import Callable
+
+import py3nvml.py3nvml as nvml
+
+from memory_profiler import memory_usage
+from utils import MyThread, get_logger, inference
+
+logger = get_logger("faster-whisper")
+parser = argparse.ArgumentParser(description="Memory benchmark")
+parser.add_argument(
+    "--gpu_memory", action="store_true", help="Measure GPU memory usage"
+)
+parser.add_argument("--device-index", type=int, default=0, help="GPU device index")
+parser.add_argument(
+    "--interval",
+    type=float,
+    default=0.5,
+    help="Interval at which measurements are collected",
+)
+args = parser.parse_args()
+device_idx = args.device_index
+interval = args.interval
+
+
+def measure_memory(func: Callable[[], None]):
+    if args.gpu_memory:
+        logger.info(
+            "Measuring maximum GPU memory usage on GPU device."
+            " Make sure to not have additional processes running on the same GPU."
+        )
+        # init nvml
+        nvml.nvmlInit()
+        handle = nvml.nvmlDeviceGetHandleByIndex(device_idx)
+        gpu_name = nvml.nvmlDeviceGetName(handle)
+        gpu_memory_limit = nvml.nvmlDeviceGetMemoryInfo(handle).total >> 20
+        gpu_power_limit = nvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0
+        info = {"gpu_memory_usage": [], "gpu_power_usage": []}
+
+        def _get_gpu_info():
+            while True:
+                info["gpu_memory_usage"].append(
+                    nvml.nvmlDeviceGetMemoryInfo(handle).used >> 20
+                )
+                info["gpu_power_usage"].append(
+                    nvml.nvmlDeviceGetPowerUsage(handle) / 1000
+                )
+                time.sleep(interval)
+
+                if stop:
+                    break
+
+            return info
+
+        stop = False
+        thread = MyThread(_get_gpu_info, params=())
+        thread.start()
+        func()
+        stop = True
+        thread.join()
+        result = thread.get_result()
+
+        # shutdown nvml
+        nvml.nvmlShutdown()
+        max_memory_usage = max(result["gpu_memory_usage"])
+        max_power_usage = max(result["gpu_power_usage"])
+        print("GPU name: %s" % gpu_name)
+        print("GPU device index: %s" % device_idx)
+        print(
+            "Maximum GPU memory usage: %dMiB / %dMiB (%.2f%%)"
+            % (
+                max_memory_usage,
+                gpu_memory_limit,
+                (max_memory_usage / gpu_memory_limit) * 100,
+            )
+        )
+        print(
+            "Maximum GPU power usage: %dW / %dW (%.2f%%)"
+            % (
+                max_power_usage,
+                gpu_power_limit,
+                (max_power_usage / gpu_power_limit) * 100,
+            )
+        )
+    else:
+        logger.info("Measuring maximum increase of memory usage.")
+        max_usage = memory_usage(func, max_usage=True, interval=interval)
+        print("Maximum increase of RAM memory usage: %d MiB" % max_usage)
+
+
+if __name__ == "__main__":
+    measure_memory(inference)

+ 1742 - 0
benchmark/normalizer.json

@@ -0,0 +1,1742 @@
+{
+  "accessorise": "accessorize",
+  "accessorised": "accessorized",
+  "accessorises": "accessorizes",
+  "accessorising": "accessorizing",
+  "acclimatisation": "acclimatization",
+  "acclimatise": "acclimatize",
+  "acclimatised": "acclimatized",
+  "acclimatises": "acclimatizes",
+  "acclimatising": "acclimatizing",
+  "accoutrements": "accouterments",
+  "aeon": "eon",
+  "aeons": "eons",
+  "aerogramme": "aerogram",
+  "aerogrammes": "aerograms",
+  "aeroplane": "airplane",
+  "aeroplanes": "airplanes",
+  "aesthete": "esthete",
+  "aesthetes": "esthetes",
+  "aesthetic": "esthetic",
+  "aesthetically": "esthetically",
+  "aesthetics": "esthetics",
+  "aetiology": "etiology",
+  "ageing": "aging",
+  "aggrandisement": "aggrandizement",
+  "agonise": "agonize",
+  "agonised": "agonized",
+  "agonises": "agonizes",
+  "agonising": "agonizing",
+  "agonisingly": "agonizingly",
+  "almanack": "almanac",
+  "almanacks": "almanacs",
+  "aluminium": "aluminum",
+  "amortisable": "amortizable",
+  "amortisation": "amortization",
+  "amortisations": "amortizations",
+  "amortise": "amortize",
+  "amortised": "amortized",
+  "amortises": "amortizes",
+  "amortising": "amortizing",
+  "amphitheatre": "amphitheater",
+  "amphitheatres": "amphitheaters",
+  "anaemia": "anemia",
+  "anaemic": "anemic",
+  "anaesthesia": "anesthesia",
+  "anaesthetic": "anesthetic",
+  "anaesthetics": "anesthetics",
+  "anaesthetise": "anesthetize",
+  "anaesthetised": "anesthetized",
+  "anaesthetises": "anesthetizes",
+  "anaesthetising": "anesthetizing",
+  "anaesthetist": "anesthetist",
+  "anaesthetists": "anesthetists",
+  "anaesthetize": "anesthetize",
+  "anaesthetized": "anesthetized",
+  "anaesthetizes": "anesthetizes",
+  "anaesthetizing": "anesthetizing",
+  "analogue": "analog",
+  "analogues": "analogs",
+  "analyse": "analyze",
+  "analysed": "analyzed",
+  "analyses": "analyzes",
+  "analysing": "analyzing",
+  "anglicise": "anglicize",
+  "anglicised": "anglicized",
+  "anglicises": "anglicizes",
+  "anglicising": "anglicizing",
+  "annualised": "annualized",
+  "antagonise": "antagonize",
+  "antagonised": "antagonized",
+  "antagonises": "antagonizes",
+  "antagonising": "antagonizing",
+  "apologise": "apologize",
+  "apologised": "apologized",
+  "apologises": "apologizes",
+  "apologising": "apologizing",
+  "appal": "appall",
+  "appals": "appalls",
+  "appetiser": "appetizer",
+  "appetisers": "appetizers",
+  "appetising": "appetizing",
+  "appetisingly": "appetizingly",
+  "arbour": "arbor",
+  "arbours": "arbors",
+  "archaeologically": "archeologically",
+  "archaeologist": "archeologist",
+  "archaeologists": "archeologists",
+  "archaeology": "archeology</span>",
+  "archeological": "archaeological",
+  "ardour": "ardor",
+  "armour": "armor",
+  "armoured": "armored",
+  "armourer": "armorer",
+  "armourers": "armorers",
+  "armouries": "armories",
+  "armoury": "armory",
+  "artefact": "artifact",
+  "artefacts": "artifacts",
+  "authorise": "authorize",
+  "authorised": "authorized",
+  "authorises": "authorizes",
+  "authorising": "authorizing",
+  "axe": "ax",
+  "backpedalled": "backpedaled",
+  "backpedalling": "backpedaling",
+  "bannister": "banister",
+  "bannisters": "banisters",
+  "baptise": "baptize",
+  "baptised": "baptized",
+  "baptises": "baptizes",
+  "baptising": "baptizing",
+  "bastardise": "bastardize",
+  "bastardised": "bastardized",
+  "bastardises": "bastardizes",
+  "bastardising": "bastardizing",
+  "battleax": "battleaxe",
+  "baulk": "balk",
+  "baulked": "balked",
+  "baulking": "balking",
+  "baulks": "balks",
+  "bedevilled": "bedeviled",
+  "bedevilling": "bedeviling",
+  "behaviour": "behavior",
+  "behavioural": "behavioral",
+  "behaviourism": "behaviorism",
+  "behaviourist": "behaviorist",
+  "behaviourists": "behaviorists",
+  "behaviours": "behaviors",
+  "behove": "behoove",
+  "behoved": "behooved",
+  "behoves": "behooves",
+  "bejewelled": "bejeweled",
+  "belabour": "belabor",
+  "belaboured": "belabored",
+  "belabouring": "belaboring",
+  "belabours": "belabors",
+  "bevelled": "beveled",
+  "bevvies": "bevies",
+  "bevvy": "bevy",
+  "biassed": "biased",
+  "biassing": "biasing",
+  "bingeing": "binging",
+  "bougainvillaea": "bougainvillea",
+  "bougainvillaeas": "bougainvilleas",
+  "bowdlerise": "bowdlerize",
+  "bowdlerised": "bowdlerized",
+  "bowdlerises": "bowdlerizes",
+  "bowdlerising": "bowdlerizing",
+  "breathalyse": "breathalyze",
+  "breathalysed": "breathalyzed",
+  "breathalyser": "breathalyzer",
+  "breathalysers": "breathalyzers",
+  "breathalyses": "breathalyzes",
+  "breathalysing": "breathalyzing",
+  "brutalise": "brutalize",
+  "brutalised": "brutalized",
+  "brutalises": "brutalizes",
+  "brutalising": "brutalizing",
+  "busses": "buses",
+  "bussing": "busing",
+  "caesarean": "cesarean",
+  "caesareans": "cesareans",
+  "calibre": "caliber",
+  "calibres": "calibers",
+  "calliper": "caliper",
+  "callipers": "calipers",
+  "callisthenics": "calisthenics",
+  "canalise": "canalize",
+  "canalised": "canalized",
+  "canalises": "canalizes",
+  "canalising": "canalizing",
+  "cancelation": "cancellation",
+  "cancelations": "cancellations",
+  "cancelled": "canceled",
+  "cancelling": "canceling",
+  "candour": "candor",
+  "cannibalise": "cannibalize",
+  "cannibalised": "cannibalized",
+  "cannibalises": "cannibalizes",
+  "cannibalising": "cannibalizing",
+  "canonise": "canonize",
+  "canonised": "canonized",
+  "canonises": "canonizes",
+  "canonising": "canonizing",
+  "capitalise": "capitalize",
+  "capitalised": "capitalized",
+  "capitalises": "capitalizes",
+  "capitalising": "capitalizing",
+  "caramelise": "caramelize",
+  "caramelised": "caramelized",
+  "caramelises": "caramelizes",
+  "caramelising": "caramelizing",
+  "carbonise": "carbonize",
+  "carbonised": "carbonized",
+  "carbonises": "carbonizes",
+  "carbonising": "carbonizing",
+  "carolled": "caroled",
+  "carolling": "caroling",
+  "catalogue": "catalog",
+  "catalogued": "cataloged",
+  "catalogues": "catalogs",
+  "cataloguing": "cataloging",
+  "catalyse": "catalyze",
+  "catalysed": "catalyzed",
+  "catalyses": "catalyzes",
+  "catalysing": "catalyzing",
+  "categorise": "categorize",
+  "categorised": "categorized",
+  "categorises": "categorizes",
+  "categorising": "categorizing",
+  "cauterise": "cauterize",
+  "cauterised": "cauterized",
+  "cauterises": "cauterizes",
+  "cauterising": "cauterizing",
+  "cavilled": "caviled",
+  "cavilling": "caviling",
+  "centigramme": "centigram",
+  "centigrammes": "centigrams",
+  "centilitre": "centiliter",
+  "centilitres": "centiliters",
+  "centimetre": "centimeter",
+  "centimetres": "centimeters",
+  "centralise": "centralize",
+  "centralised": "centralized",
+  "centralises": "centralizes",
+  "centralising": "centralizing",
+  "centre": "center",
+  "centred": "centered",
+  "centrefold": "centerfold",
+  "centrefolds": "centerfolds",
+  "centrepiece": "centerpiece",
+  "centrepieces": "centerpieces",
+  "centres": "centers",
+  "channelled": "channeled",
+  "channelling": "channeling",
+  "characterise": "characterize",
+  "characterised": "characterized",
+  "characterises": "characterizes",
+  "characterising": "characterizing",
+  "cheque": "check",
+  "chequebook": "checkbook",
+  "chequebooks": "checkbooks",
+  "chequered": "checkered",
+  "cheques": "checks",
+  "chilli": "chili",
+  "chimaera": "chimera",
+  "chimaeras": "chimeras",
+  "chiselled": "chiseled",
+  "chiselling": "chiseling",
+  "circularise": "circularize",
+  "circularised": "circularized",
+  "circularises": "circularizes",
+  "circularising": "circularizing",
+  "civilise": "civilize",
+  "civilised": "civilized",
+  "civilises": "civilizes",
+  "civilising": "civilizing",
+  "clamour": "clamor",
+  "clamoured": "clamored",
+  "clamouring": "clamoring",
+  "clamours": "clamors",
+  "clangour": "clangor",
+  "clarinettist": "clarinetist",
+  "clarinettists": "clarinetists",
+  "collectivise": "collectivize",
+  "collectivised": "collectivized",
+  "collectivises": "collectivizes",
+  "collectivising": "collectivizing",
+  "colonisation": "colonization",
+  "colonise": "colonize",
+  "colonised": "colonized",
+  "coloniser": "colonizer",
+  "colonisers": "colonizers",
+  "colonises": "colonizes",
+  "colonising": "colonizing",
+  "colour": "color",
+  "colourant": "colorant",
+  "colourants": "colorants",
+  "coloured": "colored",
+  "coloureds": "coloreds",
+  "colourful": "colorful",
+  "colourfully": "colorfully",
+  "colouring": "coloring",
+  "colourize": "colorize",
+  "colourized": "colorized",
+  "colourizes": "colorizes",
+  "colourizing": "colorizing",
+  "colourless": "colorless",
+  "colours": "colors",
+  "commercialise": "commercialize",
+  "commercialised": "commercialized",
+  "commercialises": "commercializes",
+  "commercialising": "commercializing",
+  "compartmentalise": "compartmentalize",
+  "compartmentalised": "compartmentalized",
+  "compartmentalises": "compartmentalizes",
+  "compartmentalising": "compartmentalizing",
+  "computerise": "computerize",
+  "computerised": "computerized",
+  "computerises": "computerizes",
+  "computerising": "computerizing",
+  "conceptualise": "conceptualize",
+  "conceptualised": "conceptualized",
+  "conceptualises": "conceptualizes",
+  "conceptualising": "conceptualizing",
+  "connexion": "connection",
+  "connexions": "connections",
+  "contextualise": "contextualize",
+  "contextualised": "contextualized",
+  "contextualises": "contextualizes",
+  "contextualising": "contextualizing",
+  "cosier": "cozier",
+  "cosies": "cozies",
+  "cosiest": "coziest",
+  "cosily": "cozily",
+  "cosiness": "coziness",
+  "cosy": "cozy",
+  "councillor": "councilor",
+  "councillors": "councilors",
+  "counselled": "counseled",
+  "counselling": "counseling",
+  "counsellor": "counselor",
+  "counsellors": "counselors",
+  "crenelated": "crenellated",
+  "criminalise": "criminalize",
+  "criminalised": "criminalized",
+  "criminalises": "criminalizes",
+  "criminalising": "criminalizing",
+  "criticise": "criticize",
+  "criticised": "criticized",
+  "criticises": "criticizes",
+  "criticising": "criticizing",
+  "crueller": "crueler",
+  "cruellest": "cruelest",
+  "crystallisation": "crystallization",
+  "crystallise": "crystallize",
+  "crystallised": "crystallized",
+  "crystallises": "crystallizes",
+  "crystallising": "crystallizing",
+  "cudgelled": "cudgeled",
+  "cudgelling": "cudgeling",
+  "customise": "customize",
+  "customised": "customized",
+  "customises": "customizes",
+  "customising": "customizing",
+  "cypher": "cipher",
+  "cyphers": "ciphers",
+  "decentralisation": "decentralization",
+  "decentralise": "decentralize",
+  "decentralised": "decentralized",
+  "decentralises": "decentralizes",
+  "decentralising": "decentralizing",
+  "decriminalisation": "decriminalization",
+  "decriminalise": "decriminalize",
+  "decriminalised": "decriminalized",
+  "decriminalises": "decriminalizes",
+  "decriminalising": "decriminalizing",
+  "defence": "defense",
+  "defenceless": "defenseless",
+  "defences": "defenses",
+  "dehumanisation": "dehumanization",
+  "dehumanise": "dehumanize",
+  "dehumanised": "dehumanized",
+  "dehumanises": "dehumanizes",
+  "dehumanising": "dehumanizing",
+  "demeanour": "demeanor",
+  "demilitarisation": "demilitarization",
+  "demilitarise": "demilitarize",
+  "demilitarised": "demilitarized",
+  "demilitarises": "demilitarizes",
+  "demilitarising": "demilitarizing",
+  "demobilisation": "demobilization",
+  "demobilise": "demobilize",
+  "demobilised": "demobilized",
+  "demobilises": "demobilizes",
+  "demobilising": "demobilizing",
+  "democratisation": "democratization",
+  "democratise": "democratize",
+  "democratised": "democratized",
+  "democratises": "democratizes",
+  "democratising": "democratizing",
+  "demonise": "demonize",
+  "demonised": "demonized",
+  "demonises": "demonizes",
+  "demonising": "demonizing",
+  "demoralisation": "demoralization",
+  "demoralise": "demoralize",
+  "demoralised": "demoralized",
+  "demoralises": "demoralizes",
+  "demoralising": "demoralizing",
+  "denationalisation": "denationalization",
+  "denationalise": "denationalize",
+  "denationalised": "denationalized",
+  "denationalises": "denationalizes",
+  "denationalising": "denationalizing",
+  "deodorise": "deodorize",
+  "deodorised": "deodorized",
+  "deodorises": "deodorizes",
+  "deodorising": "deodorizing",
+  "depersonalise": "depersonalize",
+  "depersonalised": "depersonalized",
+  "depersonalises": "depersonalizes",
+  "depersonalising": "depersonalizing",
+  "deputise": "deputize",
+  "deputised": "deputized",
+  "deputises": "deputizes",
+  "deputising": "deputizing",
+  "desensitisation": "desensitization",
+  "desensitise": "desensitize",
+  "desensitised": "desensitized",
+  "desensitises": "desensitizes",
+  "desensitising": "desensitizing",
+  "destabilisation": "destabilization",
+  "destabilise": "destabilize",
+  "destabilised": "destabilized",
+  "destabilises": "destabilizes",
+  "destabilising": "destabilizing",
+  "dialled": "dialed",
+  "dialling": "dialing",
+  "dialogue": "dialog",
+  "dialogues": "dialogs",
+  "diarrhoea": "diarrhea",
+  "digitise": "digitize",
+  "digitised": "digitized",
+  "digitises": "digitizes",
+  "digitising": "digitizing",
+  "disc": "disk",
+  "discolour": "discolor",
+  "discoloured": "discolored",
+  "discolouring": "discoloring",
+  "discolours": "discolors",
+  "discs": "disks",
+  "disembowelled": "disemboweled",
+  "disembowelling": "disemboweling",
+  "disfavour": "disfavor",
+  "dishevelled": "disheveled",
+  "dishonour": "dishonor",
+  "dishonourable": "dishonorable",
+  "dishonourably": "dishonorably",
+  "dishonoured": "dishonored",
+  "dishonouring": "dishonoring",
+  "dishonours": "dishonors",
+  "disorganisation": "disorganization",
+  "disorganised": "disorganized",
+  "distil": "distill",
+  "distils": "distills",
+  "dramatisation": "dramatization",
+  "dramatisations": "dramatizations",
+  "dramatise": "dramatize",
+  "dramatised": "dramatized",
+  "dramatises": "dramatizes",
+  "dramatising": "dramatizing",
+  "draught": "draft",
+  "draughtboard": "draftboard",
+  "draughtboards": "draftboards",
+  "draughtier": "draftier",
+  "draughtiest": "draftiest",
+  "draughts": "drafts",
+  "draughtsman": "draftsman",
+  "draughtsmanship": "draftsmanship",
+  "draughtsmen": "draftsmen",
+  "draughtswoman": "draftswoman",
+  "draughtswomen": "draftswomen",
+  "draughty": "drafty",
+  "drivelled": "driveled",
+  "drivelling": "driveling",
+  "duelled": "dueled",
+  "duelling": "dueling",
+  "economise": "economize",
+  "economised": "economized",
+  "economises": "economizes",
+  "economising": "economizing",
+  "editorialise": "editorialize",
+  "editorialised": "editorialized",
+  "editorialises": "editorializes",
+  "editorialising": "editorializing",
+  "edoema": "edema",
+  "empathise": "empathize",
+  "empathised": "empathized",
+  "empathises": "empathizes",
+  "empathising": "empathizing",
+  "emphasise": "emphasize",
+  "emphasised": "emphasized",
+  "emphasises": "emphasizes",
+  "emphasising": "emphasizing",
+  "enamelled": "enameled",
+  "enamelling": "enameling",
+  "enamoured": "enamored",
+  "encyclopaedia": "encyclopedia",
+  "encyclopaedias": "encyclopedias",
+  "encyclopaedic": "encyclopedic",
+  "endeavour": "endeavor",
+  "endeavoured": "endeavored",
+  "endeavouring": "endeavoring",
+  "endeavours": "endeavors",
+  "energise": "energize",
+  "energised": "energized",
+  "energises": "energizes",
+  "energising": "energizing",
+  "enrol": "enroll",
+  "enrols": "enrolls",
+  "enthral": "enthrall",
+  "enthrals": "enthralls",
+  "epaulette": "epaulet",
+  "epaulettes": "epaulets",
+  "epicentre": "epicenter",
+  "epicentres": "epicenters",
+  "epilogue": "epilog",
+  "epilogues": "epilogs",
+  "epitomise": "epitomize",
+  "epitomised": "epitomized",
+  "epitomises": "epitomizes",
+  "epitomising": "epitomizing",
+  "equalisation": "equalization",
+  "equalise": "equalize",
+  "equalised": "equalized",
+  "equaliser": "equalizer",
+  "equalisers": "equalizers",
+  "equalises": "equalizes",
+  "equalising": "equalizing",
+  "eulogise": "eulogize",
+  "eulogised": "eulogized",
+  "eulogises": "eulogizes",
+  "eulogising": "eulogizing",
+  "evangelise": "evangelize",
+  "evangelised": "evangelized",
+  "evangelises": "evangelizes",
+  "evangelising": "evangelizing",
+  "exorcise": "exorcize",
+  "exorcised": "exorcized",
+  "exorcises": "exorcizes",
+  "exorcising": "exorcizing",
+  "extemporisation": "extemporization",
+  "extemporise": "extemporize",
+  "extemporised": "extemporized",
+  "extemporises": "extemporizes",
+  "extemporising": "extemporizing",
+  "externalisation": "externalization",
+  "externalisations": "externalizations",
+  "externalise": "externalize",
+  "externalised": "externalized",
+  "externalises": "externalizes",
+  "externalising": "externalizing",
+  "factorise": "factorize",
+  "factorised": "factorized",
+  "factorises": "factorizes",
+  "factorising": "factorizing",
+  "faecal": "fecal",
+  "faeces": "feces",
+  "familiarisation": "familiarization",
+  "familiarise": "familiarize",
+  "familiarised": "familiarized",
+  "familiarises": "familiarizes",
+  "familiarising": "familiarizing",
+  "fantasise": "fantasize",
+  "fantasised": "fantasized",
+  "fantasises": "fantasizes",
+  "fantasising": "fantasizing",
+  "favour": "favor",
+  "favourable": "favorable",
+  "favourably": "favorably",
+  "favoured": "favored",
+  "favouring": "favoring",
+  "favourite": "favorite",
+  "favourites": "favorites",
+  "favouritism": "favoritism",
+  "favours": "favors",
+  "feminise": "feminize",
+  "feminised": "feminized",
+  "feminises": "feminizes",
+  "feminising": "feminizing",
+  "fertilisation": "fertilization",
+  "fertilise": "fertilize",
+  "fertilised": "fertilized",
+  "fertiliser": "fertilizer",
+  "fertilisers": "fertilizers",
+  "fertilises": "fertilizes",
+  "fertilising": "fertilizing",
+  "fervour": "fervor",
+  "fibre": "fiber",
+  "fibreglass": "fiberglass",
+  "fibres": "fibers",
+  "fictionalisation": "fictionalization",
+  "fictionalisations": "fictionalizations",
+  "fictionalise": "fictionalize",
+  "fictionalised": "fictionalized",
+  "fictionalises": "fictionalizes",
+  "fictionalising": "fictionalizing",
+  "fillet": "filet",
+  "filleted": "fileted",
+  "filleting": "fileting",
+  "fillets": "filets",
+  "finalisation": "finalization",
+  "finalise": "finalize",
+  "finalised": "finalized",
+  "finalises": "finalizes",
+  "finalising": "finalizing",
+  "flautist": "flutist",
+  "flautists": "flutists",
+  "flavour": "flavor",
+  "flavoured": "flavored",
+  "flavouring": "flavoring",
+  "flavourings": "flavorings",
+  "flavourless": "flavorless",
+  "flavours": "flavors",
+  "flavoursome": "flavorsome",
+  "flyer / flier": "flier / flyer",
+  "foetal": "fetal",
+  "foetid": "fetid",
+  "foetus": "fetus",
+  "foetuses": "fetuses",
+  "formalisation": "formalization",
+  "formalise": "formalize",
+  "formalised": "formalized",
+  "formalises": "formalizes",
+  "formalising": "formalizing",
+  "fossilisation": "fossilization",
+  "fossilise": "fossilize",
+  "fossilised": "fossilized",
+  "fossilises": "fossilizes",
+  "fossilising": "fossilizing",
+  "fraternisation": "fraternization",
+  "fraternise": "fraternize",
+  "fraternised": "fraternized",
+  "fraternises": "fraternizes",
+  "fraternising": "fraternizing",
+  "fulfil": "fulfill",
+  "fulfilment": "fulfillment",
+  "fulfils": "fulfills",
+  "funnelled": "funneled",
+  "funnelling": "funneling",
+  "gage": "gauge",
+  "gaged": "gauged",
+  "gages": "gauges",
+  "gaging": "gauging",
+  "galvanise": "galvanize",
+  "galvanised": "galvanized",
+  "galvanises": "galvanizes",
+  "galvanising": "galvanizing",
+  "gambolled": "gamboled",
+  "gambolling": "gamboling",
+  "gaol": "jail",
+  "gaolbird": "jailbird",
+  "gaolbirds": "jailbirds",
+  "gaolbreak": "jailbreak",
+  "gaolbreaks": "jailbreaks",
+  "gaoled": "jailed",
+  "gaoler": "jailer",
+  "gaolers": "jailers",
+  "gaoling": "jailing",
+  "gaols": "jails",
+  "gasses": "gases",
+  "generalisation": "generalization",
+  "generalisations": "generalizations",
+  "generalise": "generalize",
+  "generalised": "generalized",
+  "generalises": "generalizes",
+  "generalising": "generalizing",
+  "ghettoise": "ghettoize",
+  "ghettoised": "ghettoized",
+  "ghettoises": "ghettoizes",
+  "ghettoising": "ghettoizing",
+  "gipsies": "gypsies",
+  "glamor": "glamour",
+  "glamorise": "glamorize",
+  "glamorised": "glamorized",
+  "glamorises": "glamorizes",
+  "glamorising": "glamorizing",
+  "globalisation": "globalization",
+  "globalise": "globalize",
+  "globalised": "globalized",
+  "globalises": "globalizes",
+  "globalising": "globalizing",
+  "glueing": "gluing",
+  "goitre": "goiter",
+  "goitres": "goiters",
+  "gonorrhoea": "gonorrhea",
+  "gramme": "gram",
+  "grammes": "grams",
+  "gravelled": "graveled",
+  "grey": "gray",
+  "greyed": "grayed",
+  "greying": "graying",
+  "greyish": "grayish",
+  "greyness": "grayness",
+  "greys": "grays",
+  "grovelled": "groveled",
+  "grovelling": "groveling",
+  "groyne": "groin",
+  "groynes": "groins",
+  "gruelling": "grueling",
+  "gruellingly": "gruelingly",
+  "gryphon": "griffin",
+  "gryphons": "griffins",
+  "gynaecological": "gynecological",
+  "gynaecologist": "gynecologist",
+  "gynaecologists": "gynecologists",
+  "gynaecology": "gynecology",
+  "haematological": "hematological",
+  "haematologist": "hematologist",
+  "haematologists": "hematologists",
+  "haematology": "hematology",
+  "haemoglobin": "hemoglobin",
+  "haemophilia": "hemophilia",
+  "haemophiliac": "hemophiliac",
+  "haemophiliacs": "hemophiliacs",
+  "haemorrhage": "hemorrhage",
+  "haemorrhaged": "hemorrhaged",
+  "haemorrhages": "hemorrhages",
+  "haemorrhaging": "hemorrhaging",
+  "haemorrhoids": "hemorrhoids",
+  "harbour": "harbor",
+  "harboured": "harbored",
+  "harbouring": "harboring",
+  "harbours": "harbors",
+  "harmonisation": "harmonization",
+  "harmonise": "harmonize",
+  "harmonised": "harmonized",
+  "harmonises": "harmonizes",
+  "harmonising": "harmonizing",
+  "homoeopath": "homeopath",
+  "homoeopathic": "homeopathic",
+  "homoeopaths": "homeopaths",
+  "homoeopathy": "homeopathy",
+  "homogenise": "homogenize",
+  "homogenised": "homogenized",
+  "homogenises": "homogenizes",
+  "homogenising": "homogenizing",
+  "honour": "honor",
+  "honourable": "honorable",
+  "honourably": "honorably",
+  "honoured": "honored",
+  "honouring": "honoring",
+  "honours": "honors",
+  "hospitalisation": "hospitalization",
+  "hospitalise": "hospitalize",
+  "hospitalised": "hospitalized",
+  "hospitalises": "hospitalizes",
+  "hospitalising": "hospitalizing",
+  "humanise": "humanize",
+  "humanised": "humanized",
+  "humanises": "humanizes",
+  "humanising": "humanizing",
+  "humour": "humor",
+  "humoured": "humored",
+  "humouring": "humoring",
+  "humourless": "humorless",
+  "humours": "humors",
+  "hybridise": "hybridize",
+  "hybridised": "hybridized",
+  "hybridises": "hybridizes",
+  "hybridising": "hybridizing",
+  "hypnotise": "hypnotize",
+  "hypnotised": "hypnotized",
+  "hypnotises": "hypnotizes",
+  "hypnotising": "hypnotizing",
+  "hypothesise": "hypothesize",
+  "hypothesised": "hypothesized",
+  "hypothesises": "hypothesizes",
+  "hypothesising": "hypothesizing",
+  "idealisation": "idealization",
+  "idealise": "idealize",
+  "idealised": "idealized",
+  "idealises": "idealizes",
+  "idealising": "idealizing",
+  "idolise": "idolize",
+  "idolised": "idolized",
+  "idolises": "idolizes",
+  "idolising": "idolizing",
+  "immobilisation": "immobilization",
+  "immobilise": "immobilize",
+  "immobilised": "immobilized",
+  "immobiliser": "immobilizer",
+  "immobilisers": "immobilizers",
+  "immobilises": "immobilizes",
+  "immobilising": "immobilizing",
+  "immortalise": "immortalize",
+  "immortalised": "immortalized",
+  "immortalises": "immortalizes",
+  "immortalising": "immortalizing",
+  "immunisation": "immunization",
+  "immunise": "immunize",
+  "immunised": "immunized",
+  "immunises": "immunizes",
+  "immunising": "immunizing",
+  "impanelled": "impaneled",
+  "impanelling": "impaneling",
+  "imperilled": "imperiled",
+  "imperilling": "imperiling",
+  "individualise": "individualize",
+  "individualised": "individualized",
+  "individualises": "individualizes",
+  "individualising": "individualizing",
+  "industrialise": "industrialize",
+  "industrialised": "industrialized",
+  "industrialises": "industrializes",
+  "industrialising": "industrializing",
+  "inflexion": "inflection",
+  "inflexions": "inflections",
+  "initialise": "initialize",
+  "initialised": "initialized",
+  "initialises": "initializes",
+  "initialising": "initializing",
+  "initialled": "initialed",
+  "initialling": "initialing",
+  "instal": "install",
+  "instalment": "installment",
+  "instalments": "installments",
+  "instals": "installs",
+  "instil": "instill",
+  "instils": "instills",
+  "institutionalisation": "institutionalization",
+  "institutionalise": "institutionalize",
+  "institutionalised": "institutionalized",
+  "institutionalises": "institutionalizes",
+  "institutionalising": "institutionalizing",
+  "intellectualise": "intellectualize",
+  "intellectualised": "intellectualized",
+  "intellectualises": "intellectualizes",
+  "intellectualising": "intellectualizing",
+  "internalisation": "internalization",
+  "internalise": "internalize",
+  "internalised": "internalized",
+  "internalises": "internalizes",
+  "internalising": "internalizing",
+  "internationalisation": "internationalization",
+  "internationalise": "internationalize",
+  "internationalised": "internationalized",
+  "internationalises": "internationalizes",
+  "internationalising": "internationalizing",
+  "ionisation": "ionization",
+  "ionise": "ionize",
+  "ionised": "ionized",
+  "ioniser": "ionizer",
+  "ionisers": "ionizers",
+  "ionises": "ionizes",
+  "ionising": "ionizing",
+  "italicise": "italicize",
+  "italicised": "italicized",
+  "italicises": "italicizes",
+  "italicising": "italicizing",
+  "itemise": "itemize",
+  "itemised": "itemized",
+  "itemises": "itemizes",
+  "itemising": "itemizing",
+  "jeopardise": "jeopardize",
+  "jeopardised": "jeopardized",
+  "jeopardises": "jeopardizes",
+  "jeopardising": "jeopardizing",
+  "jewelled": "jeweled",
+  "jeweller": "jeweler",
+  "jewellers": "jewelers",
+  "jewellery": "jewelry",
+  "judgement": "judgment",
+  "kilogramme": "kilogram",
+  "kilogrammes": "kilograms",
+  "kilometre": "kilometer",
+  "kilometres": "kilometers",
+  "labelled": "labeled",
+  "labelling": "labeling",
+  "labour": "labor",
+  "laboured": "labored",
+  "labourer": "laborer",
+  "labourers": "laborers",
+  "labouring": "laboring",
+  "labours": "labors",
+  "lacklustre": "lackluster",
+  "legalisation": "legalization",
+  "legalise": "legalize",
+  "legalised": "legalized",
+  "legalises": "legalizes",
+  "legalising": "legalizing",
+  "legitimise": "legitimize",
+  "legitimised": "legitimized",
+  "legitimises": "legitimizes",
+  "legitimising": "legitimizing",
+  "leukaemia": "leukemia",
+  "levelled": "leveled",
+  "leveller": "leveler",
+  "levellers": "levelers",
+  "levelling": "leveling",
+  "libelled": "libeled",
+  "libelling": "libeling",
+  "libellous": "libelous",
+  "liberalisation": "liberalization",
+  "liberalise": "liberalize",
+  "liberalised": "liberalized",
+  "liberalises": "liberalizes",
+  "liberalising": "liberalizing",
+  "licence": "license",
+  "licenced": "licensed",
+  "licences": "licenses",
+  "licencing": "licensing",
+  "likeable": "likable",
+  "lionisation": "lionization",
+  "lionise": "lionize",
+  "lionised": "lionized",
+  "lionises": "lionizes",
+  "lionising": "lionizing",
+  "liquidise": "liquidize",
+  "liquidised": "liquidized",
+  "liquidiser": "liquidizer",
+  "liquidisers": "liquidizers",
+  "liquidises": "liquidizes",
+  "liquidising": "liquidizing",
+  "litre": "liter",
+  "litres": "liters",
+  "localise": "localize",
+  "localised": "localized",
+  "localises": "localizes",
+  "localising": "localizing",
+  "louvre": "louver",
+  "louvred": "louvered",
+  "louvres": "louvers",
+  "lustre": "luster",
+  "magnetise": "magnetize",
+  "magnetised": "magnetized",
+  "magnetises": "magnetizes",
+  "magnetising": "magnetizing",
+  "manoeuvrability": "maneuverability",
+  "manoeuvrable": "maneuverable",
+  "manoeuvre": "maneuver",
+  "manoeuvred": "maneuvered",
+  "manoeuvres": "maneuvers",
+  "manoeuvring": "maneuvering",
+  "manoeuvrings": "maneuverings",
+  "marginalisation": "marginalization",
+  "marginalise": "marginalize",
+  "marginalised": "marginalized",
+  "marginalises": "marginalizes",
+  "marginalising": "marginalizing",
+  "marshalled": "marshaled",
+  "marshalling": "marshaling",
+  "marvelled": "marveled",
+  "marvelling": "marveling",
+  "marvellous": "marvelous",
+  "marvellously": "marvelously",
+  "materialisation": "materialization",
+  "materialise": "materialize",
+  "materialised": "materialized",
+  "materialises": "materializes",
+  "materialising": "materializing",
+  "maximisation": "maximization",
+  "maximise": "maximize",
+  "maximised": "maximized",
+  "maximises": "maximizes",
+  "maximising": "maximizing",
+  "meagre": "meager",
+  "mechanisation": "mechanization",
+  "mechanise": "mechanize",
+  "mechanised": "mechanized",
+  "mechanises": "mechanizes",
+  "mechanising": "mechanizing",
+  "mediaeval": "medieval",
+  "memorialise": "memorialize",
+  "memorialised": "memorialized",
+  "memorialises": "memorializes",
+  "memorialising": "memorializing",
+  "memorise": "memorize",
+  "memorised": "memorized",
+  "memorises": "memorizes",
+  "memorising": "memorizing",
+  "mesmerise": "mesmerize",
+  "mesmerised": "mesmerized",
+  "mesmerises": "mesmerizes",
+  "mesmerising": "mesmerizing",
+  "metabolise": "metabolize",
+  "metabolised": "metabolized",
+  "metabolises": "metabolizes",
+  "metabolising": "metabolizing",
+  "metre": "meter",
+  "metres": "meters",
+  "mhm": "hmm",
+  "micrometre": "micrometer",
+  "micrometres": "micrometers",
+  "militarise": "militarize",
+  "militarised": "militarized",
+  "militarises": "militarizes",
+  "militarising": "militarizing",
+  "milligramme": "milligram",
+  "milligrammes": "milligrams",
+  "millilitre": "milliliter",
+  "millilitres": "milliliters",
+  "millimetre": "millimeter",
+  "millimetres": "millimeters",
+  "miniaturisation": "miniaturization",
+  "miniaturise": "miniaturize",
+  "miniaturised": "miniaturized",
+  "miniaturises": "miniaturizes",
+  "miniaturising": "miniaturizing",
+  "minibusses": "minibuses",
+  "minimise": "minimize",
+  "minimised": "minimized",
+  "minimises": "minimizes",
+  "minimising": "minimizing",
+  "misbehaviour": "misbehavior",
+  "misdemeanour": "misdemeanor",
+  "misdemeanours": "misdemeanors",
+  "misspelt": "misspelled",
+  "mitre": "miter",
+  "mitres": "miters",
+  "mm": "hmm",
+  "mmm": "hmm",
+  "mobilisation": "mobilization",
+  "mobilise": "mobilize",
+  "mobilised": "mobilized",
+  "mobilises": "mobilizes",
+  "mobilising": "mobilizing",
+  "modelled": "modeled",
+  "modeller": "modeler",
+  "modellers": "modelers",
+  "modelling": "modeling",
+  "modernise": "modernize",
+  "modernised": "modernized",
+  "modernises": "modernizes",
+  "modernising": "modernizing",
+  "moisturise": "moisturize",
+  "moisturised": "moisturized",
+  "moisturiser": "moisturizer",
+  "moisturisers": "moisturizers",
+  "moisturises": "moisturizes",
+  "moisturising": "moisturizing",
+  "monologue": "monolog",
+  "monologues": "monologs",
+  "monopolisation": "monopolization",
+  "monopolise": "monopolize",
+  "monopolised": "monopolized",
+  "monopolises": "monopolizes",
+  "monopolising": "monopolizing",
+  "moralise": "moralize",
+  "moralised": "moralized",
+  "moralises": "moralizes",
+  "moralising": "moralizing",
+  "motorised": "motorized",
+  "mould": "mold",
+  "moulded": "molded",
+  "moulder": "molder",
+  "mouldered": "moldered",
+  "mouldering": "moldering",
+  "moulders": "molders",
+  "mouldier": "moldier",
+  "mouldiest": "moldiest",
+  "moulding": "molding",
+  "mouldings": "moldings",
+  "moulds": "molds",
+  "mouldy": "moldy",
+  "moult": "molt",
+  "moulted": "molted",
+  "moulting": "molting",
+  "moults": "molts",
+  "moustache": "mustache",
+  "moustached": "mustached",
+  "moustaches": "mustaches",
+  "moustachioed": "mustachioed",
+  "multicoloured": "multicolored",
+  "nationalisation": "nationalization",
+  "nationalisations": "nationalizations",
+  "nationalise": "nationalize",
+  "nationalised": "nationalized",
+  "nationalises": "nationalizes",
+  "nationalising": "nationalizing",
+  "naturalisation": "naturalization",
+  "naturalise": "naturalize",
+  "naturalised": "naturalized",
+  "naturalises": "naturalizes",
+  "naturalising": "naturalizing",
+  "neighbour": "neighbor",
+  "neighbourhood": "neighborhood",
+  "neighbourhoods": "neighborhoods",
+  "neighbouring": "neighboring",
+  "neighbourliness": "neighborliness",
+  "neighbourly": "neighborly",
+  "neighbours": "neighbors",
+  "neutralisation": "neutralization",
+  "neutralise": "neutralize",
+  "neutralised": "neutralized",
+  "neutralises": "neutralizes",
+  "neutralising": "neutralizing",
+  "normalisation": "normalization",
+  "normalise": "normalize",
+  "normalised": "normalized",
+  "normalises": "normalizes",
+  "normalising": "normalizing",
+  "odour": "odor",
+  "odourless": "odorless",
+  "odours": "odors",
+  "oesophagus": "esophagus",
+  "oesophaguses": "esophaguses",
+  "oestrogen": "estrogen",
+  "offence": "offense",
+  "offences": "offenses",
+  "omelette": "omelet",
+  "omelettes": "omelets",
+  "optimise": "optimize",
+  "optimised": "optimized",
+  "optimises": "optimizes",
+  "optimising": "optimizing",
+  "organisation": "organization",
+  "organisational": "organizational",
+  "organisations": "organizations",
+  "organise": "organize",
+  "organised": "organized",
+  "organiser": "organizer",
+  "organisers": "organizers",
+  "organises": "organizes",
+  "organising": "organizing",
+  "orthopaedic": "orthopedic",
+  "orthopaedics": "orthopedics",
+  "ostracise": "ostracize",
+  "ostracised": "ostracized",
+  "ostracises": "ostracizes",
+  "ostracising": "ostracizing",
+  "outmanoeuvre": "outmaneuver",
+  "outmanoeuvred": "outmaneuvered",
+  "outmanoeuvres": "outmaneuvers",
+  "outmanoeuvring": "outmaneuvering",
+  "overemphasise": "overemphasize",
+  "overemphasised": "overemphasized",
+  "overemphasises": "overemphasizes",
+  "overemphasising": "overemphasizing",
+  "oxidisation": "oxidization",
+  "oxidise": "oxidize",
+  "oxidised": "oxidized",
+  "oxidises": "oxidizes",
+  "oxidising": "oxidizing",
+  "paederast": "pederast",
+  "paederasts": "pederasts",
+  "paediatric": "pediatric",
+  "paediatrician": "pediatrician",
+  "paediatricians": "pediatricians",
+  "paediatrics": "pediatrics",
+  "paedophile": "pedophile",
+  "paedophiles": "pedophiles",
+  "paedophilia": "pedophilia",
+  "palaeolithic": "paleolithic",
+  "palaeontologist": "paleontologist",
+  "palaeontologists": "paleontologists",
+  "palaeontology": "paleontology",
+  "panelled": "paneled",
+  "panelling": "paneling",
+  "panellist": "panelist",
+  "panellists": "panelists",
+  "paralyse": "paralyze",
+  "paralysed": "paralyzed",
+  "paralyses": "paralyzes",
+  "paralysing": "paralyzing",
+  "parcelled": "parceled",
+  "parcelling": "parceling",
+  "parlour": "parlor",
+  "parlours": "parlors",
+  "particularise": "particularize",
+  "particularised": "particularized",
+  "particularises": "particularizes",
+  "particularising": "particularizing",
+  "passivisation": "passivization",
+  "passivise": "passivize",
+  "passivised": "passivized",
+  "passivises": "passivizes",
+  "passivising": "passivizing",
+  "pasteurisation": "pasteurization",
+  "pasteurise": "pasteurize",
+  "pasteurised": "pasteurized",
+  "pasteurises": "pasteurizes",
+  "pasteurising": "pasteurizing",
+  "patronise": "patronize",
+  "patronised": "patronized",
+  "patronises": "patronizes",
+  "patronising": "patronizing",
+  "patronisingly": "patronizingly",
+  "pedalled": "pedaled",
+  "pedalling": "pedaling",
+  "pedestrianisation": "pedestrianization",
+  "pedestrianise": "pedestrianize",
+  "pedestrianised": "pedestrianized",
+  "pedestrianises": "pedestrianizes",
+  "pedestrianising": "pedestrianizing",
+  "penalise": "penalize",
+  "penalised": "penalized",
+  "penalises": "penalizes",
+  "penalising": "penalizing",
+  "pencilled": "penciled",
+  "pencilling": "penciling",
+  "personalise": "personalize",
+  "personalised": "personalized",
+  "personalises": "personalizes",
+  "personalising": "personalizing",
+  "pharmacopoeia": "pharmacopeia",
+  "pharmacopoeias": "pharmacopeias",
+  "philosophise": "philosophize",
+  "philosophised": "philosophized",
+  "philosophises": "philosophizes",
+  "philosophising": "philosophizing",
+  "philtre": "filter",
+  "philtres": "filters",
+  "phoney": "phony",
+  "plagiarise": "plagiarize",
+  "plagiarised": "plagiarized",
+  "plagiarises": "plagiarizes",
+  "plagiarising": "plagiarizing",
+  "plough": "plow",
+  "ploughed": "plowed",
+  "ploughing": "plowing",
+  "ploughman": "plowman",
+  "ploughmen": "plowmen",
+  "ploughs": "plows",
+  "ploughshare": "plowshare",
+  "ploughshares": "plowshares",
+  "polarisation": "polarization",
+  "polarise": "polarize",
+  "polarised": "polarized",
+  "polarises": "polarizes",
+  "polarising": "polarizing",
+  "politicisation": "politicization",
+  "politicise": "politicize",
+  "politicised": "politicized",
+  "politicises": "politicizes",
+  "politicising": "politicizing",
+  "popularisation": "popularization",
+  "popularise": "popularize",
+  "popularised": "popularized",
+  "popularises": "popularizes",
+  "popularising": "popularizing",
+  "pouffe": "pouf",
+  "pouffes": "poufs",
+  "practise": "practice",
+  "practised": "practiced",
+  "practises": "practices",
+  "practising": "practicing",
+  "praesidium": "presidium",
+  "praesidiums": "presidiums",
+  "pressurisation": "pressurization",
+  "pressurise": "pressurize",
+  "pressurised": "pressurized",
+  "pressurises": "pressurizes",
+  "pressurising": "pressurizing",
+  "pretence": "pretense",
+  "pretences": "pretenses",
+  "primaeval": "primeval",
+  "prioritisation": "prioritization",
+  "prioritise": "prioritize",
+  "prioritised": "prioritized",
+  "prioritises": "prioritizes",
+  "prioritising": "prioritizing",
+  "privatisation": "privatization",
+  "privatisations": "privatizations",
+  "privatise": "privatize",
+  "privatised": "privatized",
+  "privatises": "privatizes",
+  "privatising": "privatizing",
+  "professionalisation": "professionalization",
+  "professionalise": "professionalize",
+  "professionalised": "professionalized",
+  "professionalises": "professionalizes",
+  "professionalising": "professionalizing",
+  "programme": "program",
+  "programmes": "programs",
+  "prologue": "prolog",
+  "prologues": "prologs",
+  "propagandise": "propagandize",
+  "propagandised": "propagandized",
+  "propagandises": "propagandizes",
+  "propagandising": "propagandizing",
+  "proselytise": "proselytize",
+  "proselytised": "proselytized",
+  "proselytiser": "proselytizer",
+  "proselytisers": "proselytizers",
+  "proselytises": "proselytizes",
+  "proselytising": "proselytizing",
+  "psychoanalyse": "psychoanalyze",
+  "psychoanalysed": "psychoanalyzed",
+  "psychoanalyses": "psychoanalyzes",
+  "psychoanalysing": "psychoanalyzing",
+  "publicise": "publicize",
+  "publicised": "publicized",
+  "publicises": "publicizes",
+  "publicising": "publicizing",
+  "pulverisation": "pulverization",
+  "pulverise": "pulverize",
+  "pulverised": "pulverized",
+  "pulverises": "pulverizes",
+  "pulverising": "pulverizing",
+  "pummelled": "pummel",
+  "pummelling": "pummeled",
+  "pyjama": "pajama",
+  "pyjamas": "pajamas",
+  "pzazz": "pizzazz",
+  "quarrelled": "quarreled",
+  "quarrelling": "quarreling",
+  "radicalise": "radicalize",
+  "radicalised": "radicalized",
+  "radicalises": "radicalizes",
+  "radicalising": "radicalizing",
+  "rancour": "rancor",
+  "randomise": "randomize",
+  "randomised": "randomized",
+  "randomises": "randomizes",
+  "randomising": "randomizing",
+  "rationalisation": "rationalization",
+  "rationalisations": "rationalizations",
+  "rationalise": "rationalize",
+  "rationalised": "rationalized",
+  "rationalises": "rationalizes",
+  "rationalising": "rationalizing",
+  "ravelled": "raveled",
+  "ravelling": "raveling",
+  "realisable": "realizable",
+  "realisation": "realization",
+  "realisations": "realizations",
+  "realise": "realize",
+  "realised": "realized",
+  "realises": "realizes",
+  "realising": "realizing",
+  "recognisable": "recognizable",
+  "recognisably": "recognizably",
+  "recognisance": "recognizance",
+  "recognise": "recognize",
+  "recognised": "recognized",
+  "recognises": "recognizes",
+  "recognising": "recognizing",
+  "reconnoitre": "reconnoiter",
+  "reconnoitred": "reconnoitered",
+  "reconnoitres": "reconnoiters",
+  "reconnoitring": "reconnoitering",
+  "refuelled": "refueled",
+  "refuelling": "refueling",
+  "regularisation": "regularization",
+  "regularise": "regularize",
+  "regularised": "regularized",
+  "regularises": "regularizes",
+  "regularising": "regularizing",
+  "remodelled": "remodeled",
+  "remodelling": "remodeling",
+  "remould": "remold",
+  "remoulded": "remolded",
+  "remoulding": "remolding",
+  "remoulds": "remolds",
+  "reorganisation": "reorganization",
+  "reorganisations": "reorganizations",
+  "reorganise": "reorganize",
+  "reorganised": "reorganized",
+  "reorganises": "reorganizes",
+  "reorganising": "reorganizing",
+  "revelled": "reveled",
+  "reveller": "reveler",
+  "revellers": "revelers",
+  "revelling": "reveling",
+  "revitalise": "revitalize",
+  "revitalised": "revitalized",
+  "revitalises": "revitalizes",
+  "revitalising": "revitalizing",
+  "revolutionise": "revolutionize",
+  "revolutionised": "revolutionized",
+  "revolutionises": "revolutionizes",
+  "revolutionising": "revolutionizing",
+  "rhapsodise": "rhapsodize",
+  "rhapsodised": "rhapsodized",
+  "rhapsodises": "rhapsodizes",
+  "rhapsodising": "rhapsodizing",
+  "rigour": "rigor",
+  "rigours": "rigors",
+  "ritualised": "ritualized",
+  "rivalled": "rivaled",
+  "rivalling": "rivaling",
+  "romanticise": "romanticize",
+  "romanticised": "romanticized",
+  "romanticises": "romanticizes",
+  "romanticising": "romanticizing",
+  "rumour": "rumor",
+  "rumoured": "rumored",
+  "rumours": "rumors",
+  "sabre": "saber",
+  "sabres": "sabers",
+  "saltpetre": "saltpeter",
+  "sanitise": "sanitize",
+  "sanitised": "sanitized",
+  "sanitises": "sanitizes",
+  "sanitising": "sanitizing",
+  "satirise": "satirize",
+  "satirised": "satirized",
+  "satirises": "satirizes",
+  "satirising": "satirizing",
+  "saviour": "savior",
+  "saviours": "saviors",
+  "savour": "savor",
+  "savoured": "savored",
+  "savouries": "savories",
+  "savouring": "savoring",
+  "savours": "savors",
+  "savoury": "savory",
+  "scandalise": "scandalize",
+  "scandalised": "scandalized",
+  "scandalises": "scandalizes",
+  "scandalising": "scandalizing",
+  "sceptic": "skeptic",
+  "sceptical": "skeptical",
+  "sceptically": "skeptically",
+  "scepticism": "skepticism",
+  "sceptics": "skeptics",
+  "sceptre": "scepter",
+  "sceptres": "scepters",
+  "scrutinise": "scrutinize",
+  "scrutinised": "scrutinized",
+  "scrutinises": "scrutinizes",
+  "scrutinising": "scrutinizing",
+  "secularisation": "secularization",
+  "secularise": "secularize",
+  "secularised": "secularized",
+  "secularises": "secularizes",
+  "secularising": "secularizing",
+  "sensationalise": "sensationalize",
+  "sensationalised": "sensationalized",
+  "sensationalises": "sensationalizes",
+  "sensationalising": "sensationalizing",
+  "sensitise": "sensitize",
+  "sensitised": "sensitized",
+  "sensitises": "sensitizes",
+  "sensitising": "sensitizing",
+  "sentimentalise": "sentimentalize",
+  "sentimentalised": "sentimentalized",
+  "sentimentalises": "sentimentalizes",
+  "sentimentalising": "sentimentalizing",
+  "sepulchre": "sepulcher",
+  "sepulchres": "sepulchers",
+  "serialisation": "serialization",
+  "serialisations": "serializations",
+  "serialise": "serialize",
+  "serialised": "serialized",
+  "serialises": "serializes",
+  "serialising": "serializing",
+  "sermonise": "sermonize",
+  "sermonised": "sermonized",
+  "sermonises": "sermonizes",
+  "sermonising": "sermonizing",
+  "sheikh": "sheik",
+  "shovelled": "shoveled",
+  "shovelling": "shoveling",
+  "shrivelled": "shriveled",
+  "shrivelling": "shriveling",
+  "signalise": "signalize",
+  "signalised": "signalized",
+  "signalises": "signalizes",
+  "signalising": "signalizing",
+  "signalled": "signaled",
+  "signalling": "signaling",
+  "smoulder": "smolder",
+  "smouldered": "smoldered",
+  "smouldering": "smoldering",
+  "smoulders": "smolders",
+  "snivelled": "sniveled",
+  "snivelling": "sniveling",
+  "snorkelled": "snorkeled",
+  "snorkelling": "snorkeling",
+  "snowplough": "snowplow",
+  "snowploughs": "snowplow",
+  "socialisation": "socialization",
+  "socialise": "socialize",
+  "socialised": "socialized",
+  "socialises": "socializes",
+  "socialising": "socializing",
+  "sodomise": "sodomize",
+  "sodomised": "sodomized",
+  "sodomises": "sodomizes",
+  "sodomising": "sodomizing",
+  "solemnise": "solemnize",
+  "solemnised": "solemnized",
+  "solemnises": "solemnizes",
+  "solemnising": "solemnizing",
+  "sombre": "somber",
+  "specialisation": "specialization",
+  "specialisations": "specializations",
+  "specialise": "specialize",
+  "specialised": "specialized",
+  "specialises": "specializes",
+  "specialising": "specializing",
+  "spectre": "specter",
+  "spectres": "specters",
+  "spiralled": "spiraled",
+  "spiralling": "spiraling",
+  "splendour": "splendor",
+  "splendours": "splendors",
+  "squirrelled": "squirreled",
+  "squirrelling": "squirreling",
+  "stabilisation": "stabilization",
+  "stabilise": "stabilize",
+  "stabilised": "stabilized",
+  "stabiliser": "stabilizer",
+  "stabilisers": "stabilizers",
+  "stabilises": "stabilizes",
+  "stabilising": "stabilizing",
+  "standardisation": "standardization",
+  "standardise": "standardize",
+  "standardised": "standardized",
+  "standardises": "standardizes",
+  "standardising": "standardizing",
+  "stencilled": "stenciled",
+  "stencilling": "stenciling",
+  "sterilisation": "sterilization",
+  "sterilisations": "sterilizations",
+  "sterilise": "sterilize",
+  "sterilised": "sterilized",
+  "steriliser": "sterilizer",
+  "sterilisers": "sterilizers",
+  "sterilises": "sterilizes",
+  "sterilising": "sterilizing",
+  "stigmatisation": "stigmatization",
+  "stigmatise": "stigmatize",
+  "stigmatised": "stigmatized",
+  "stigmatises": "stigmatizes",
+  "stigmatising": "stigmatizing",
+  "storey": "story",
+  "storeys": "stories",
+  "subsidisation": "subsidization",
+  "subsidise": "subsidize",
+  "subsidised": "subsidized",
+  "subsidiser": "subsidizer",
+  "subsidisers": "subsidizers",
+  "subsidises": "subsidizes",
+  "subsidising": "subsidizing",
+  "succour": "succor",
+  "succoured": "succored",
+  "succouring": "succoring",
+  "succours": "succors",
+  "sulphate": "sulfate",
+  "sulphates": "sulfates",
+  "sulphide": "sulfide",
+  "sulphides": "sulfides",
+  "sulphur": "sulfur",
+  "sulphurous": "sulfurous",
+  "summarise": "summarize",
+  "summarised": "summarized",
+  "summarises": "summarizes",
+  "summarising": "summarizing",
+  "swivelled": "swiveled",
+  "swivelling": "swiveling",
+  "symbolise": "symbolize",
+  "symbolised": "symbolized",
+  "symbolises": "symbolizes",
+  "symbolising": "symbolizing",
+  "sympathise": "sympathize",
+  "sympathised": "sympathized",
+  "sympathiser": "sympathizer",
+  "sympathisers": "sympathizers",
+  "sympathises": "sympathizes",
+  "sympathising": "sympathizing",
+  "synchronisation": "synchronization",
+  "synchronise": "synchronize",
+  "synchronised": "synchronized",
+  "synchronises": "synchronizes",
+  "synchronising": "synchronizing",
+  "synthesise": "synthesize",
+  "synthesised": "synthesized",
+  "synthesiser": "synthesizer",
+  "synthesisers": "synthesizers",
+  "synthesises": "synthesizes",
+  "synthesising": "synthesizing",
+  "syphon": "siphon",
+  "syphoned": "siphoned",
+  "syphoning": "siphoning",
+  "syphons": "siphons",
+  "systematisation": "systematization",
+  "systematise": "systematize",
+  "systematised": "systematized",
+  "systematises": "systematizes",
+  "systematising": "systematizing",
+  "tantalise": "tantalize",
+  "tantalised": "tantalized",
+  "tantalises": "tantalizes",
+  "tantalising": "tantalizing",
+  "tantalisingly": "tantalizingly",
+  "tasselled": "tasseled",
+  "technicolour": "technicolor",
+  "temporise": "temporize",
+  "temporised": "temporized",
+  "temporises": "temporizes",
+  "temporising": "temporizing",
+  "tenderise": "tenderize",
+  "tenderised": "tenderized",
+  "tenderises": "tenderizes",
+  "tenderising": "tenderizing",
+  "terrorise": "terrorize",
+  "terrorised": "terrorized",
+  "terrorises": "terrorizes",
+  "terrorising": "terrorizing",
+  "theatre": "theater",
+  "theatregoer": "theatergoer",
+  "theatregoers": "theatergoers",
+  "theatres": "theaters",
+  "theorise": "theorize",
+  "theorised": "theorized",
+  "theorises": "theorizes",
+  "theorising": "theorizing",
+  "tonne": "ton",
+  "tonnes": "tons",
+  "towelled": "toweled",
+  "towelling": "toweling",
+  "toxaemia": "toxemia",
+  "tranquillise": "tranquilize",
+  "tranquillised": "tranquilized",
+  "tranquilliser": "tranquilizer",
+  "tranquillisers": "tranquilizers",
+  "tranquillises": "tranquilizes",
+  "tranquillising": "tranquilizing",
+  "tranquillity": "tranquility",
+  "tranquillize": "tranquilize",
+  "tranquillized": "tranquilized",
+  "tranquillizer": "tranquilizer",
+  "tranquillizers": "tranquilizers",
+  "tranquillizes": "tranquilizes",
+  "tranquillizing": "tranquilizing",
+  "tranquilly": "tranquility",
+  "transistorised": "transistorized",
+  "traumatise": "traumatize",
+  "traumatised": "traumatized",
+  "traumatises": "traumatizes",
+  "traumatising": "traumatizing",
+  "travelled": "traveled",
+  "traveller": "traveler",
+  "travellers": "travelers",
+  "travelling": "traveling",
+  "travelog": "travelogue",
+  "travelogs": "travelogues",
+  "trialled": "trialed",
+  "trialling": "trialing",
+  "tricolour": "tricolor",
+  "tricolours": "tricolors",
+  "trivialise": "trivialize",
+  "trivialised": "trivialized",
+  "trivialises": "trivializes",
+  "trivialising": "trivializing",
+  "tumour": "tumor",
+  "tumours": "tumors",
+  "tunnelled": "tunneled",
+  "tunnelling": "tunneling",
+  "tyrannise": "tyrannize",
+  "tyrannised": "tyrannized",
+  "tyrannises": "tyrannizes",
+  "tyrannising": "tyrannizing",
+  "tyre": "tire",
+  "tyres": "tires",
+  "unauthorised": "unauthorized",
+  "uncivilised": "uncivilized",
+  "underutilised": "underutilized",
+  "unequalled": "unequaled",
+  "unfavourable": "unfavorable",
+  "unfavourably": "unfavorably",
+  "unionisation": "unionization",
+  "unionise": "unionize",
+  "unionised": "unionized",
+  "unionises": "unionizes",
+  "unionising": "unionizing",
+  "unorganised": "unorganized",
+  "unravelled": "unraveled",
+  "unravelling": "unraveling",
+  "unrecognisable": "unrecognizable",
+  "unrecognised": "unrecognized",
+  "unrivalled": "unrivaled",
+  "unsavoury": "unsavory",
+  "untrammelled": "untrammeled",
+  "urbanisation": "urbanization",
+  "urbanise": "urbanize",
+  "urbanised": "urbanized",
+  "urbanises": "urbanizes",
+  "urbanising": "urbanizing",
+  "utilisable": "utilizable",
+  "utilisation": "utilization",
+  "utilise": "utilize",
+  "utilised": "utilized",
+  "utilises": "utilizes",
+  "utilising": "utilizing",
+  "valour": "valor",
+  "vandalise": "vandalize",
+  "vandalised": "vandalized",
+  "vandalises": "vandalizes",
+  "vandalising": "vandalizing",
+  "vaporisation": "vaporization",
+  "vaporise": "vaporize",
+  "vaporised": "vaporized",
+  "vaporises": "vaporizes",
+  "vaporising": "vaporizing",
+  "vapour": "vapor",
+  "vapours": "vapors",
+  "verbalise": "verbalize",
+  "verbalised": "verbalized",
+  "verbalises": "verbalizes",
+  "verbalising": "verbalizing",
+  "victimisation": "victimization",
+  "victimise": "victimize",
+  "victimised": "victimized",
+  "victimises": "victimizes",
+  "victimising": "victimizing",
+  "videodisc": "videodisk",
+  "videodiscs": "videodisks",
+  "vigour": "vigor",
+  "visualisation": "visualization",
+  "visualisations": "visualizations",
+  "visualise": "visualize",
+  "visualised": "visualized",
+  "visualises": "visualizes",
+  "visualising": "visualizing",
+  "vocalisation": "vocalization",
+  "vocalisations": "vocalizations",
+  "vocalise": "vocalize",
+  "vocalised": "vocalized",
+  "vocalises": "vocalizes",
+  "vocalising": "vocalizing",
+  "vulcanised": "vulcanized",
+  "vulgarisation": "vulgarization",
+  "vulgarise": "vulgarize",
+  "vulgarised": "vulgarized",
+  "vulgarises": "vulgarizes",
+  "vulgarising": "vulgarizing",
+  "waggon": "wagon",
+  "waggons": "wagons",
+  "watercolour": "watercolor",
+  "watercolours": "watercolors",
+  "weaselled": "weaseled",
+  "weaselling": "weaseling",
+  "westernisation": "westernization",
+  "westernise": "westernize",
+  "westernised": "westernized",
+  "westernises": "westernizes",
+  "westernising": "westernizing",
+  "womanise": "womanize",
+  "womanised": "womanized",
+  "womaniser": "womanizer",
+  "womanisers": "womanizers",
+  "womanises": "womanizes",
+  "womanising": "womanizing",
+  "woollen": "woolen",
+  "woollens": "woolens",
+  "woollies": "woolies",
+  "woolly": "wooly",
+  "worshipped": "worshiped",
+  "worshipper": "worshiper",
+  "worshipping": "worshiping",
+  "yodelled": "yodeled",
+  "yodelling": "yodeling",
+  "yoghourt": "yogurt",
+  "yoghourts": "yogurts",
+  "yoghurt": "yogurt",
+  "yoghurts": "yogurts"
+}

+ 6 - 0
benchmark/requirements.benchmark.txt

@@ -0,0 +1,6 @@
+transformers
+jiwer
+datasets
+memory_profiler
+py3nvml
+pytubefix

+ 31 - 0
benchmark/speed_benchmark.py

@@ -0,0 +1,31 @@
+import argparse
+import timeit
+
+from typing import Callable
+
+from utils import inference
+
+parser = argparse.ArgumentParser(description="Speed benchmark")
+parser.add_argument(
+    "--repeat",
+    type=int,
+    default=3,
+    help="Times an experiment will be run.",
+)
+args = parser.parse_args()
+
+
+def measure_speed(func: Callable[[], None]):
+    # As noted in https://docs.python.org/3/library/timeit.html#timeit.Timer.repeat,
+    # the minimum should be reported rather than the average.
+    runtimes = timeit.repeat(
+        func,
+        repeat=args.repeat,
+        number=10,
+    )
+    print(runtimes)
+    print("Min execution time: %.3fs" % (min(runtimes) / 10.0))
+
+
+if __name__ == "__main__":
+    measure_speed(inference)
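
The script times `inference` ten times per repeat and reports the best repeat divided by ten, i.e. the fastest per-call average. Below is a minimal sketch of the same pattern with a stand-in CPU workload; `dummy_inference` is hypothetical and only illustrates the timing logic, it is not part of the repository.

```python
# Minimal sketch of the timing pattern used above, with a stand-in workload.
import timeit


def dummy_inference():
    sum(i * i for i in range(100_000))


# timeit.repeat returns one total per repeat, each covering `number` executions,
# so the per-call estimate is min(runtimes) / number. The minimum is reported
# because slower runs usually reflect system interference, not the code itself.
runtimes = timeit.repeat(dummy_inference, repeat=3, number=10)
print("Min execution time: %.3fs" % (min(runtimes) / 10.0))
```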

+ 39 - 0
benchmark/utils.py

@@ -0,0 +1,39 @@
+import logging
+
+from threading import Thread
+from typing import Optional
+
+from faster_whisper import WhisperModel
+
+model_path = "large-v3"
+model = WhisperModel(model_path, device="cuda")
+
+
+def inference():
+    segments, info = model.transcribe("benchmark.m4a", language="fr")
+    for segment in segments:
+        print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
+
+
+def get_logger(name: Optional[str] = None) -> logging.Logger:
+    formatter = logging.Formatter("%(levelname)s: %(message)s")
+    logger = logging.getLogger(name)
+    logger.setLevel(logging.DEBUG)
+    handler = logging.StreamHandler()
+    handler.setFormatter(formatter)
+    logger.addHandler(handler)
+    return logger
+
+
+class MyThread(Thread):
+    def __init__(self, func, params):
+        super().__init__()
+        self.func = func
+        self.params = params
+        self.result = None
+
+    def run(self):
+        self.result = self.func(*self.params)
+
+    def get_result(self):
+        return self.result
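
`MyThread` exists because `threading.Thread` discards its target's return value; storing it in `run()` lets callers read it back after `join()`. The sketch below restates the pattern in a self-contained way (importing `benchmark/utils.py` directly would also load the GPU model at import time); the squaring workload is made up for illustration.

```python
# Self-contained restatement of the MyThread pattern above; the workload is illustrative.
from threading import Thread


class ResultThread(Thread):
    def __init__(self, func, params):
        super().__init__()
        self.func = func
        self.params = params
        self.result = None

    def run(self):
        # Store the return value so it can be read back after join().
        self.result = self.func(*self.params)

    def get_result(self):
        return self.result


thread = ResultThread(lambda x: x * x, (21,))
thread.start()
thread.join()  # wait for run() to finish before reading the result
print(thread.get_result())  # 441
```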

+ 59 - 0
benchmark/wer_benchmark.py

@@ -0,0 +1,59 @@
+import argparse
+import json
+import os
+
+from datasets import load_dataset
+from jiwer import wer
+from tqdm import tqdm
+from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
+
+from faster_whisper import WhisperModel
+
+parser = argparse.ArgumentParser(description="WER benchmark")
+parser.add_argument(
+    "--audio_numb",
+    type=int,
+    default=None,
+    help="Specify the number of validation audio files in the dataset."
+    " Set to None to retrieve all audio files.",
+)
+args = parser.parse_args()
+
+model_path = "large-v3"
+model = WhisperModel(model_path, device="cuda")
+
+# load the dataset with streaming mode
+dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
+
+with open(os.path.join(os.path.dirname(__file__), "normalizer.json"), "r") as f:
+    normalizer = EnglishTextNormalizer(json.load(f))
+
+
+def inference(batch):
+    batch["transcription"] = []
+    for sample in batch["audio"]:
+        segments, info = model.transcribe(sample["array"], language="en")
+        batch["transcription"].append("".join([segment.text for segment in segments]))
+    batch["reference"] = batch["text"]
+    return batch
+
+
+dataset = dataset.map(function=inference, batched=True, batch_size=16)
+
+all_transcriptions = []
+all_references = []
+
+# iterate over the dataset and run inference
+for i, result in tqdm(enumerate(dataset), desc="Evaluating..."):
+    all_transcriptions.append(result["transcription"])
+    all_references.append(result["reference"])
+    if args.audio_numb and i == (args.audio_numb - 1):
+        break
+
+# normalize predictions and references
+all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions]
+all_references = [normalizer(reference) for reference in all_references]
+
+# compute the WER metric
+word_error_rate = 100 * wer(hypothesis=all_transcriptions, reference=all_references)
+print("WER: %.3f" % word_error_rate)

+ 6 - 0
docker/Dockerfile

@@ -0,0 +1,6 @@
+FROM nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04
+WORKDIR /root
+RUN apt-get update -y && apt-get install -y python3-pip
+COPY infer.py jfk.flac ./
+RUN pip3 install faster-whisper
+CMD ["python3", "infer.py"]

+ 7 - 0
docker/infer.py

@@ -0,0 +1,7 @@
+from faster_whisper import WhisperModel
+
+jfk_path = "jfk.flac"
+model = WhisperModel("tiny", device="cuda")
+segments, info = model.transcribe(jfk_path, word_timestamps=True)
+for segment in segments:
+    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

BIN
docker/jfk.flac


+ 14 - 0
faster_whisper/__init__.py

@@ -0,0 +1,14 @@
+from faster_whisper.audio import decode_audio
+from faster_whisper.transcribe import BatchedInferencePipeline, WhisperModel
+from faster_whisper.utils import available_models, download_model, format_timestamp
+from faster_whisper.version import __version__
+
+__all__ = [
+    "available_models",
+    "decode_audio",
+    "WhisperModel",
+    "BatchedInferencePipeline",
+    "download_model",
+    "format_timestamp",
+    "__version__",
+]
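
These re-exports are the package's public surface. The sketch below shows the two main entry points, mirroring the sequential call in `docker/infer.py` and the batched pipeline defined later in `faster_whisper/transcribe.py`; it is a hedged example, with "audio.mp3" as a placeholder path and the model size, device, and compute type chosen only for illustration.

```python
# Hedged sketch of the public API re-exported above; paths and sizes are placeholders.
from faster_whisper import BatchedInferencePipeline, WhisperModel

model = WhisperModel("tiny", device="cpu", compute_type="int8")

# Sequential API: segments are produced lazily while iterating.
segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language:", info.language, info.language_probability)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

# Batched API: wraps an existing model and decodes several chunks per forward pass.
batched = BatchedInferencePipeline(model=model)
segments, info = batched.transcribe("audio.mp3", batch_size=8)
```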

+ 0 - 0
faster_whisper/assets/__init__.py


BIN
faster_whisper/assets/silero_decoder_v5.onnx


BIN
faster_whisper/assets/silero_encoder_v5.onnx


+ 123 - 0
faster_whisper/audio.py

@@ -0,0 +1,123 @@
+"""We use the PyAV library to decode the audio: https://github.com/PyAV-Org/PyAV
+
+The advantage of PyAV is that it bundles the FFmpeg libraries, so there are no additional
+system dependencies: FFmpeg does not need to be installed on the system.
+
+However, the API is quite low-level so we need to manipulate audio frames directly.
+"""
+
+import gc
+import io
+import itertools
+
+from typing import BinaryIO, Union
+
+import av
+import numpy as np
+
+
+def decode_audio(
+    input_file: Union[str, BinaryIO],
+    sampling_rate: int = 16000,
+    split_stereo: bool = False,
+):
+    """Decodes the audio.
+
+    Args:
+      input_file: Path to the input file or a file-like object.
+      sampling_rate: Resample the audio to this sample rate.
+      split_stereo: Return separate left and right channels.
+
+    Returns:
+      A float32 Numpy array.
+
+      If `split_stereo` is enabled, the function returns a 2-tuple with the
+      separated left and right channels.
+    """
+    resampler = av.audio.resampler.AudioResampler(
+        format="s16",
+        layout="mono" if not split_stereo else "stereo",
+        rate=sampling_rate,
+    )
+
+    raw_buffer = io.BytesIO()
+    dtype = None
+
+    with av.open(input_file, mode="r", metadata_errors="ignore") as container:
+        frames = container.decode(audio=0)
+        frames = _ignore_invalid_frames(frames)
+        frames = _group_frames(frames, 500000)
+        frames = _resample_frames(frames, resampler)
+
+        for frame in frames:
+            array = frame.to_ndarray()
+            dtype = array.dtype
+            raw_buffer.write(array)
+
+    # It appears that some objects related to the resampler are not freed
+    # unless the garbage collector is manually run.
+    # https://github.com/SYSTRAN/faster-whisper/issues/390
+    # Note that this slows down loading the audio a little bit.
+    # If that is a concern, use ffmpeg directly instead, as done here:
+    # https://github.com/openai/whisper/blob/25639fc/whisper/audio.py#L25-L62
+    del resampler
+    gc.collect()
+
+    audio = np.frombuffer(raw_buffer.getbuffer(), dtype=dtype)
+
+    # Convert s16 back to f32.
+    audio = audio.astype(np.float32) / 32768.0
+
+    if split_stereo:
+        left_channel = audio[0::2]
+        right_channel = audio[1::2]
+        return left_channel, right_channel
+
+    return audio
+
+
+def _ignore_invalid_frames(frames):
+    iterator = iter(frames)
+
+    while True:
+        try:
+            yield next(iterator)
+        except StopIteration:
+            break
+        except av.error.InvalidDataError:
+            continue
+
+
+def _group_frames(frames, num_samples=None):
+    fifo = av.audio.fifo.AudioFifo()
+
+    for frame in frames:
+        frame.pts = None  # Ignore timestamp check.
+        fifo.write(frame)
+
+        if num_samples is not None and fifo.samples >= num_samples:
+            yield fifo.read()
+
+    if fifo.samples > 0:
+        yield fifo.read()
+
+
+def _resample_frames(frames, resampler):
+    # Add None to flush the resampler.
+    for frame in itertools.chain(frames, [None]):
+        yield from resampler.resample(frame)
+
+
+def pad_or_trim(array, length: int = 3000, *, axis: int = -1):
+    """
+    Pad or trim the Mel features array to `length` frames along `axis`
+    (3000 by default, as expected by the encoder).
+    """
+    if array.shape[axis] > length:
+        array = array.take(indices=range(length), axis=axis)
+
+    if array.shape[axis] < length:
+        pad_widths = [(0, 0)] * array.ndim
+        pad_widths[axis] = (0, length - array.shape[axis])
+        array = np.pad(array, pad_widths)
+
+    return array
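
`decode_audio` returns a float32 mono waveform at the requested rate, or a (left, right) tuple when `split_stereo=True`, while `pad_or_trim` operates on the Mel feature matrix rather than the raw waveform. Below is a hedged usage sketch; "stereo.wav" is a placeholder path and the random array stands in for real features.

```python
# Hedged usage sketch for the helpers above; "stereo.wav" is a placeholder path.
import numpy as np

from faster_whisper.audio import decode_audio, pad_or_trim

# Mono decoding at 16 kHz, the rate Whisper models expect.
audio = decode_audio("stereo.wav", sampling_rate=16000)
print(audio.dtype, audio.shape)  # float32 (num_samples,)

# Per-channel decoding for stereo recordings.
left, right = decode_audio("stereo.wav", sampling_rate=16000, split_stereo=True)

# pad_or_trim pads with zeros or truncates the last axis to exactly 3000 frames.
features = np.random.rand(80, 1234).astype(np.float32)  # stand-in for Mel features
print(pad_or_trim(features).shape)  # (80, 3000)
```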

+ 230 - 0
faster_whisper/feature_extractor.py

@@ -0,0 +1,230 @@
+import numpy as np
+
+
+class FeatureExtractor:
+    def __init__(
+        self,
+        feature_size=80,
+        sampling_rate=16000,
+        hop_length=160,
+        chunk_length=30,
+        n_fft=400,
+    ):
+        self.n_fft = n_fft
+        self.hop_length = hop_length
+        self.chunk_length = chunk_length
+        self.n_samples = chunk_length * sampling_rate
+        self.nb_max_frames = self.n_samples // hop_length
+        self.time_per_frame = hop_length / sampling_rate
+        self.sampling_rate = sampling_rate
+        self.mel_filters = self.get_mel_filters(
+            sampling_rate, n_fft, n_mels=feature_size
+        ).astype("float32")
+
+    @staticmethod
+    def get_mel_filters(sr, n_fft, n_mels=128):
+        # Initialize the weights
+        n_mels = int(n_mels)
+
+        # Center freqs of each FFT bin
+        fftfreqs = np.fft.rfftfreq(n=n_fft, d=1.0 / sr)
+
+        # 'Center freqs' of mel bands - uniformly spaced between limits
+        min_mel = 0.0
+        max_mel = 45.245640471924965
+
+        mels = np.linspace(min_mel, max_mel, n_mels + 2)
+
+        # Fill in the linear scale
+        f_min = 0.0
+        f_sp = 200.0 / 3
+        freqs = f_min + f_sp * mels
+
+        # And now the nonlinear scale
+        min_log_hz = 1000.0  # beginning of log region (Hz)
+        min_log_mel = (min_log_hz - f_min) / f_sp  # same (Mels)
+        logstep = np.log(6.4) / 27.0  # step size for log region
+
+        # If we have vector data, vectorize
+        log_t = mels >= min_log_mel
+        freqs[log_t] = min_log_hz * np.exp(logstep * (mels[log_t] - min_log_mel))
+
+        fdiff = np.diff(freqs)
+        ramps = freqs.reshape(-1, 1) - fftfreqs.reshape(1, -1)
+
+        lower = -ramps[:-2] / np.expand_dims(fdiff[:-1], axis=1)
+        upper = ramps[2:] / np.expand_dims(fdiff[1:], axis=1)
+
+        # Intersect them with each other and zero, vectorized across all i
+        weights = np.maximum(np.zeros_like(lower), np.minimum(lower, upper))
+
+        # Slaney-style mel is scaled to be approx constant energy per channel
+        enorm = 2.0 / (freqs[2 : n_mels + 2] - freqs[:n_mels])
+        weights *= np.expand_dims(enorm, axis=1)
+
+        return weights
+
+    @staticmethod
+    def stft(
+        input_array: np.ndarray,
+        n_fft: int,
+        hop_length: int = None,
+        win_length: int = None,
+        window: np.ndarray = None,
+        center: bool = True,
+        mode: str = "reflect",
+        normalized: bool = False,
+        onesided: bool = None,
+        return_complex: bool = None,
+    ):
+        # Default initialization for hop_length and win_length
+        hop_length = hop_length if hop_length is not None else n_fft // 4
+        win_length = win_length if win_length is not None else n_fft
+        input_is_complex = np.iscomplexobj(input_array)
+
+        # Determine if the output should be complex
+        return_complex = (
+            return_complex
+            if return_complex is not None
+            else (input_is_complex or (window is not None and np.iscomplexobj(window)))
+        )
+
+        if not return_complex and return_complex is None:
+            raise ValueError(
+                "stft requires the return_complex parameter for real inputs."
+            )
+
+        # Input checks
+        if not np.issubdtype(input_array.dtype, np.floating) and not input_is_complex:
+            raise ValueError(
+                "stft: expected an array of floating point or complex values,"
+                f" got {input_array.dtype}"
+            )
+
+        if input_array.ndim > 2 or input_array.ndim < 1:
+            raise ValueError(
+                f"stft: expected a 1D or 2D array, but got {input_array.ndim}D array"
+            )
+
+        # Handle 1D input
+        if input_array.ndim == 1:
+            input_array = np.expand_dims(input_array, axis=0)
+            input_array_1d = True
+        else:
+            input_array_1d = False
+
+        # Center padding if required
+        if center:
+            pad_amount = n_fft // 2
+            input_array = np.pad(
+                input_array, ((0, 0), (pad_amount, pad_amount)), mode=mode
+            )
+
+        batch, length = input_array.shape
+
+        # Additional input checks
+        if n_fft <= 0 or n_fft > length:
+            raise ValueError(
+                f"stft: expected 0 < n_fft <= {length}, but got n_fft={n_fft}"
+            )
+
+        if hop_length <= 0:
+            raise ValueError(
+                f"stft: expected hop_length > 0, but got hop_length={hop_length}"
+            )
+
+        if win_length <= 0 or win_length > n_fft:
+            raise ValueError(
+                f"stft: expected 0 < win_length <= n_fft, but got win_length={win_length}"
+            )
+
+        if window is not None:
+            if window.ndim != 1 or window.shape[0] != win_length:
+                raise ValueError(
+                    f"stft: expected a 1D window array of size equal to win_length={win_length}, "
+                    f"but got window with size {window.shape}"
+                )
+
+        # Handle padding of the window if necessary
+        if win_length < n_fft:
+            left = (n_fft - win_length) // 2
+            window_ = np.zeros(n_fft, dtype=window.dtype)
+            window_[left : left + win_length] = window
+        else:
+            window_ = window
+
+        # Calculate the number of frames
+        n_frames = 1 + (length - n_fft) // hop_length
+
+        # Time to columns
+        input_array = np.lib.stride_tricks.as_strided(
+            input_array,
+            (batch, n_frames, n_fft),
+            (
+                input_array.strides[0],
+                hop_length * input_array.strides[1],
+                input_array.strides[1],
+            ),
+        )
+
+        if window_ is not None:
+            input_array = input_array * window_
+
+        # FFT and transpose
+        complex_fft = input_is_complex
+        onesided = onesided if onesided is not None else not complex_fft
+
+        if normalized:
+            norm = "ortho"
+        else:
+            norm = None
+
+        if complex_fft:
+            if onesided:
+                raise ValueError(
+                    "Cannot have onesided output if window or input is complex"
+                )
+            output = np.fft.fft(input_array, n=n_fft, axis=-1, norm=norm)
+        else:
+            output = np.fft.rfft(input_array, n=n_fft, axis=-1, norm=norm)
+
+        output = output.transpose((0, 2, 1))
+
+        if input_array_1d:
+            output = output.squeeze(0)
+
+        return output if return_complex else np.real(output)
+
+    def __call__(self, waveform: np.ndarray, padding=160, chunk_length=None):
+        """
+        Compute the log-Mel spectrogram of the provided audio.
+        """
+
+        if chunk_length is not None:
+            self.n_samples = chunk_length * self.sampling_rate
+            self.nb_max_frames = self.n_samples // self.hop_length
+
+        if waveform.dtype != np.float32:
+            waveform = waveform.astype(np.float32)
+
+        if padding:
+            waveform = np.pad(waveform, (0, padding))
+
+        window = np.hanning(self.n_fft + 1)[:-1].astype("float32")
+
+        stft = self.stft(
+            waveform,
+            self.n_fft,
+            self.hop_length,
+            window=window,
+            return_complex=True,
+        ).astype("complex64")
+        magnitudes = np.abs(stft[..., :-1]) ** 2
+
+        mel_spec = self.mel_filters @ magnitudes
+
+        log_spec = np.log10(np.clip(mel_spec, a_min=1e-10, a_max=None))
+        log_spec = np.maximum(log_spec, log_spec.max() - 8.0)
+        log_spec = (log_spec + 4.0) / 4.0
+
+        return log_spec
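
`FeatureExtractor.__call__` turns a 16 kHz waveform into the log-Mel spectrogram the encoder consumes: a 400-point STFT with a Hann window and a 160-sample hop, projection onto the Mel filter bank, log10, clamping to 8 log10 units below the peak, and an affine rescaling. Here is a hedged sketch on a synthetic one-second tone; the signal is illustrative only.

```python
# Hedged sketch: log-Mel features for a synthetic one-second 440 Hz tone.
import numpy as np

from faster_whisper.feature_extractor import FeatureExtractor

sampling_rate = 16000
t = np.arange(sampling_rate) / sampling_rate
waveform = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

extractor = FeatureExtractor(feature_size=80, sampling_rate=sampling_rate)
features = extractor(waveform, padding=160)

# 80 Mel bins, one frame per 160-sample hop (10 ms), so ~100 frames per second.
print(features.shape)  # (80, 101)
```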

+ 314 - 0
faster_whisper/tokenizer.py

@@ -0,0 +1,314 @@
+import string
+
+from functools import cached_property
+from typing import List, Optional, Tuple
+
+import tokenizers
+
+
+class Tokenizer:
+    """Simple wrapper around a tokenizers.Tokenizer."""
+
+    def __init__(
+        self,
+        tokenizer: tokenizers.Tokenizer,
+        multilingual: bool,
+        task: Optional[str] = None,
+        language: Optional[str] = None,
+    ):
+        self.tokenizer = tokenizer
+
+        if multilingual:
+            if task not in _TASKS:
+                raise ValueError(
+                    "'%s' is not a valid task (accepted tasks: %s)"
+                    % (task, ", ".join(_TASKS))
+                )
+
+            if language not in _LANGUAGE_CODES:
+                raise ValueError(
+                    "'%s' is not a valid language code (accepted language codes: %s)"
+                    % (language, ", ".join(_LANGUAGE_CODES))
+                )
+
+            self.task = self.tokenizer.token_to_id("<|%s|>" % task)
+            self.language = self.tokenizer.token_to_id("<|%s|>" % language)
+            self.language_code = language
+        else:
+            self.task = None
+            self.language = None
+            self.language_code = "en"
+
+    @cached_property
+    def transcribe(self) -> int:
+        return self.tokenizer.token_to_id("<|transcribe|>")
+
+    @cached_property
+    def translate(self) -> int:
+        return self.tokenizer.token_to_id("<|translate|>")
+
+    @cached_property
+    def sot(self) -> int:
+        return self.tokenizer.token_to_id("<|startoftranscript|>")
+
+    @cached_property
+    def sot_lm(self) -> int:
+        return self.tokenizer.token_to_id("<|startoflm|>")
+
+    @cached_property
+    def sot_prev(self) -> int:
+        return self.tokenizer.token_to_id("<|startofprev|>")
+
+    @cached_property
+    def eot(self) -> int:
+        return self.tokenizer.token_to_id("<|endoftext|>")
+
+    @cached_property
+    def no_timestamps(self) -> int:
+        return self.tokenizer.token_to_id("<|notimestamps|>")
+
+    @property
+    def timestamp_begin(self) -> int:
+        return self.no_timestamps + 1
+
+    @property
+    def sot_sequence(self) -> List[int]:
+        sequence = [self.sot]
+
+        if self.language is not None:
+            sequence.append(self.language)
+
+        if self.task is not None:
+            sequence.append(self.task)
+
+        return sequence
+
+    def encode(self, text: str) -> List[int]:
+        return self.tokenizer.encode(text, add_special_tokens=False).ids
+
+    def decode(self, tokens: List[int]) -> str:
+        text_tokens = [token for token in tokens if token < self.eot]
+        return self.tokenizer.decode(text_tokens)
+
+    def decode_with_timestamps(self, tokens: List[int]) -> str:
+        outputs = [[]]
+
+        for token in tokens:
+            if token >= self.timestamp_begin:
+                timestamp = f"<|{(token - self.timestamp_begin) * 0.02:.2f}|>"
+                outputs.append(timestamp)
+                outputs.append([])
+            else:
+                outputs[-1].append(token)
+
+        return "".join(
+            [s if isinstance(s, str) else self.tokenizer.decode(s) for s in outputs]
+        )
+
+    @cached_property
+    def non_speech_tokens(self) -> Tuple[int, ...]:
+        """
+        Returns the list of tokens to suppress in order to avoid any speaker tags or non-speech
+        annotations, to prevent sampling texts that are not actually spoken in the audio, e.g.
+
+        - ♪♪♪
+        - ( SPEAKING FOREIGN LANGUAGE )
+        - [DAVID] Hey there,
+
+        keeping basic punctuation like commas, periods, question marks, exclamation points, etc.
+        """
+        symbols = list('"#()*+/:;<=>@[\\]^_`{|}~「」『』')
+        symbols += (
+            "<< >> <<< >>> -- --- -( -[ (' (\" (( )) ((( ))) [[ ]] {{ }} ♪♪ ♪♪♪".split()
+        )
+
+        # symbols that may be a single token or multiple tokens depending on the tokenizer.
+        # In case they're multiple tokens, suppress the first token, which is safe because:
+        # These are between U+2640 and U+267F miscellaneous symbols that are okay to suppress
+        # in generations, and in the 3-byte UTF-8 representation they share the first two bytes.
+        miscellaneous = set("♩♪♫♬♭♮♯")
+        assert all(0x2640 <= ord(c) <= 0x267F for c in miscellaneous)
+
+        # allow hyphens "-" and single quotes "'" between words, but not at the beginning of a word
+        result = {self.encode(" -")[0], self.encode(" '")[0]}
+        for symbol in symbols + list(miscellaneous):
+            for tokens in [
+                self.encode(symbol),
+                self.encode(" " + symbol),
+            ]:
+                if len(tokens) == 1 or symbol in miscellaneous:
+                    result.add(tokens[0])
+
+        return tuple(sorted(result))
+
+    def split_to_word_tokens(
+        self, tokens: List[int]
+    ) -> Tuple[List[str], List[List[int]]]:
+        if self.language_code in {"zh", "ja", "th", "lo", "my", "yue"}:
+            # These languages don't typically use spaces, so it is difficult to split words
+            # without morpheme analysis. Here, we instead split words at any
+            # position where the tokens are decoded as valid unicode points
+            return self.split_tokens_on_unicode(tokens)
+
+        return self.split_tokens_on_spaces(tokens)
+
+    def split_tokens_on_unicode(
+        self, tokens: List[int]
+    ) -> Tuple[List[str], List[List[int]]]:
+        decoded_full = self.decode_with_timestamps(tokens)
+        replacement_char = "\ufffd"
+
+        words = []
+        word_tokens = []
+        current_tokens = []
+        unicode_offset = 0
+
+        for token in tokens:
+            current_tokens.append(token)
+            decoded = self.decode_with_timestamps(current_tokens)
+
+            try:
+                replacement_char_index = decoded.index(replacement_char)
+                replacement_char_index += unicode_offset
+            except ValueError:
+                replacement_char_index = None
+
+            if replacement_char_index is None or (
+                replacement_char_index < len(decoded_full)
+                and decoded_full[replacement_char_index] == replacement_char
+            ):
+                words.append(decoded)
+                word_tokens.append(current_tokens)
+                current_tokens = []
+                unicode_offset += len(decoded)
+
+        return words, word_tokens
+
+    def split_tokens_on_spaces(
+        self, tokens: List[int]
+    ) -> Tuple[List[str], List[List[int]]]:
+        subwords, subword_tokens_list = self.split_tokens_on_unicode(tokens)
+        words = []
+        word_tokens = []
+
+        for subword, subword_tokens in zip(subwords, subword_tokens_list):
+            special = subword_tokens[0] >= self.eot
+            with_space = subword.startswith(" ")
+            punctuation = subword.strip() in string.punctuation
+            if special or with_space or punctuation or len(words) == 0:
+                words.append(subword)
+                word_tokens.append(subword_tokens)
+            else:
+                words[-1] = words[-1] + subword
+                word_tokens[-1].extend(subword_tokens)
+
+        return words, word_tokens
+
+
+_TASKS = (
+    "transcribe",
+    "translate",
+)
+
+_LANGUAGE_CODES = (
+    "af",
+    "am",
+    "ar",
+    "as",
+    "az",
+    "ba",
+    "be",
+    "bg",
+    "bn",
+    "bo",
+    "br",
+    "bs",
+    "ca",
+    "cs",
+    "cy",
+    "da",
+    "de",
+    "el",
+    "en",
+    "es",
+    "et",
+    "eu",
+    "fa",
+    "fi",
+    "fo",
+    "fr",
+    "gl",
+    "gu",
+    "ha",
+    "haw",
+    "he",
+    "hi",
+    "hr",
+    "ht",
+    "hu",
+    "hy",
+    "id",
+    "is",
+    "it",
+    "ja",
+    "jw",
+    "ka",
+    "kk",
+    "km",
+    "kn",
+    "ko",
+    "la",
+    "lb",
+    "ln",
+    "lo",
+    "lt",
+    "lv",
+    "mg",
+    "mi",
+    "mk",
+    "ml",
+    "mn",
+    "mr",
+    "ms",
+    "mt",
+    "my",
+    "ne",
+    "nl",
+    "nn",
+    "no",
+    "oc",
+    "pa",
+    "pl",
+    "ps",
+    "pt",
+    "ro",
+    "ru",
+    "sa",
+    "sd",
+    "si",
+    "sk",
+    "sl",
+    "sn",
+    "so",
+    "sq",
+    "sr",
+    "su",
+    "sv",
+    "sw",
+    "ta",
+    "te",
+    "tg",
+    "th",
+    "tk",
+    "tl",
+    "tr",
+    "tt",
+    "uk",
+    "ur",
+    "uz",
+    "vi",
+    "yi",
+    "yo",
+    "zh",
+    "yue",
+)
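
The `Tokenizer` wrapper resolves Whisper's special tokens (start-of-transcript, language and task tags, timestamps) on top of a Hugging Face `tokenizers.Tokenizer`. Below is a hedged sketch that assumes a converted model directory shipping a `tokenizer.json`, as the converted CTranslate2 models on the Hugging Face Hub do; "tiny" is just an example size and `download_model` fetches it over the network.

```python
# Hedged sketch: build the wrapper above from a converted model directory.
import os

import tokenizers

from faster_whisper.tokenizer import Tokenizer
from faster_whisper.utils import download_model

model_dir = download_model("tiny")  # downloads the converted model if needed
hf_tokenizer = tokenizers.Tokenizer.from_file(os.path.join(model_dir, "tokenizer.json"))

tokenizer = Tokenizer(hf_tokenizer, multilingual=True, task="transcribe", language="en")

# Ids of <|startoftranscript|>, <|en|>, <|transcribe|>, in that order.
print(tokenizer.sot_sequence)
print(tokenizer.decode(tokenizer.encode(" Hello world")))
```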

+ 1903 - 0
faster_whisper/transcribe.py

@@ -0,0 +1,1903 @@
+import itertools
+import json
+import logging
+import os
+import zlib
+
+from dataclasses import asdict, dataclass
+from inspect import signature
+from math import ceil
+from typing import BinaryIO, Iterable, List, Optional, Tuple, Union
+from warnings import warn
+
+import ctranslate2
+import numpy as np
+import tokenizers
+
+from tqdm import tqdm
+
+from faster_whisper.audio import decode_audio, pad_or_trim
+from faster_whisper.feature_extractor import FeatureExtractor
+from faster_whisper.tokenizer import _LANGUAGE_CODES, Tokenizer
+from faster_whisper.utils import download_model, format_timestamp, get_end, get_logger
+from faster_whisper.vad import (
+    SpeechTimestampsMap,
+    VadOptions,
+    collect_chunks,
+    get_speech_timestamps,
+    merge_segments,
+)
+
+
+@dataclass
+class Word:
+    start: float
+    end: float
+    word: str
+    probability: float
+
+    def _asdict(self):
+        warn(
+            "Word._asdict() method is deprecated, use dataclasses.asdict(Word) instead",
+            DeprecationWarning,
+            2,
+        )
+        return asdict(self)
+
+
+@dataclass
+class Segment:
+    id: int
+    seek: int
+    start: float
+    end: float
+    text: str
+    tokens: List[int]
+    avg_logprob: float
+    compression_ratio: float
+    no_speech_prob: float
+    words: Optional[List[Word]]
+    temperature: Optional[float]
+
+    def _asdict(self):
+        warn(
+            "Segment._asdict() method is deprecated, use dataclasses.asdict(Segment) instead",
+            DeprecationWarning,
+            2,
+        )
+        return asdict(self)
+
+
+@dataclass
+class TranscriptionOptions:
+    beam_size: int
+    best_of: int
+    patience: float
+    length_penalty: float
+    repetition_penalty: float
+    no_repeat_ngram_size: int
+    log_prob_threshold: Optional[float]
+    no_speech_threshold: Optional[float]
+    compression_ratio_threshold: Optional[float]
+    condition_on_previous_text: bool
+    prompt_reset_on_temperature: float
+    temperatures: List[float]
+    initial_prompt: Optional[Union[str, Iterable[int]]]
+    prefix: Optional[str]
+    suppress_blank: bool
+    suppress_tokens: Optional[List[int]]
+    without_timestamps: bool
+    max_initial_timestamp: float
+    word_timestamps: bool
+    prepend_punctuations: str
+    append_punctuations: str
+    multilingual: bool
+    max_new_tokens: Optional[int]
+    clip_timestamps: Union[str, List[float]]
+    hallucination_silence_threshold: Optional[float]
+    hotwords: Optional[str]
+
+
+@dataclass
+class TranscriptionInfo:
+    language: str
+    language_probability: float
+    duration: float
+    duration_after_vad: float
+    all_language_probs: Optional[List[Tuple[str, float]]]
+    transcription_options: TranscriptionOptions
+    vad_options: VadOptions
+
+
+class BatchedInferencePipeline:
+    def __init__(
+        self,
+        model,
+    ):
+        self.model: WhisperModel = model
+        self.last_speech_timestamp = 0.0
+
+    def forward(self, features, tokenizer, chunks_metadata, options):
+        encoder_output, outputs = self.generate_segment_batched(
+            features, tokenizer, options
+        )
+
+        segmented_outputs = []
+        segment_sizes = []
+        for chunk_metadata, output in zip(chunks_metadata, outputs):
+            duration = chunk_metadata["end_time"] - chunk_metadata["start_time"]
+            segment_size = int(ceil(duration) * self.model.frames_per_second)
+            segment_sizes.append(segment_size)
+            (
+                subsegments,
+                seek,
+                single_timestamp_ending,
+            ) = self.model._split_segments_by_timestamps(
+                tokenizer=tokenizer,
+                tokens=output["tokens"],
+                time_offset=chunk_metadata["start_time"],
+                segment_size=segment_size,
+                segment_duration=duration,
+                seek=0,
+            )
+            segmented_outputs.append(
+                [
+                    dict(
+                        text=tokenizer.decode(subsegment["tokens"]),
+                        avg_logprob=output["avg_logprob"],
+                        no_speech_prob=output["no_speech_prob"],
+                        tokens=subsegment["tokens"],
+                        start=subsegment["start"],
+                        end=subsegment["end"],
+                        compression_ratio=get_compression_ratio(
+                            tokenizer.decode(subsegment["tokens"])
+                        ),
+                        seek=int(
+                            chunk_metadata["start_time"] * self.model.frames_per_second
+                        ),
+                    )
+                    for subsegment in subsegments
+                ]
+            )
+        if options.word_timestamps:
+            self.last_speech_timestamp = self.model.add_word_timestamps(
+                segmented_outputs,
+                tokenizer,
+                encoder_output,
+                segment_sizes,
+                options.prepend_punctuations,
+                options.append_punctuations,
+                self.last_speech_timestamp,
+            )
+
+        return segmented_outputs
+
+    def generate_segment_batched(
+        self,
+        features: np.ndarray,
+        tokenizer: Tokenizer,
+        options: TranscriptionOptions,
+    ):
+        batch_size = features.shape[0]
+
+        prompt = self.model.get_prompt(
+            tokenizer,
+            previous_tokens=(
+                tokenizer.encode(options.initial_prompt)
+                if options.initial_prompt is not None
+                else []
+            ),
+            without_timestamps=options.without_timestamps,
+            hotwords=options.hotwords,
+        )
+
+        if options.max_new_tokens is not None:
+            max_length = len(prompt) + options.max_new_tokens
+        else:
+            max_length = self.model.max_length
+
+        if max_length > self.model.max_length:
+            raise ValueError(
+                f"The length of the prompt is {len(prompt)}, and the `max_new_tokens` "
+                f"{max_length - len(prompt)}. Thus, the combined length of the prompt "
+                f"and `max_new_tokens` is: {max_length}. This exceeds the "
+                f"`max_length` of the Whisper model: {self.model.max_length}. "
+                "You should either reduce the length of your prompt, or "
+                "reduce the value of `max_new_tokens`, "
+                f"so that their combined length is less that {self.model.max_length}."
+            )
+
+        encoder_output = self.model.encode(features)
+        prompts = [prompt.copy() for _ in range(batch_size)]
+
+        if options.multilingual:
+            language_tokens = [
+                tokenizer.tokenizer.token_to_id(segment_langs[0][0])
+                for segment_langs in self.model.model.detect_language(encoder_output)
+            ]
+            language_token_index = prompt.index(tokenizer.language)
+
+            for i, language_token in enumerate(language_tokens):
+                prompts[i][language_token_index] = language_token
+
+        results = self.model.model.generate(
+            encoder_output,
+            prompts,
+            beam_size=options.beam_size,
+            patience=options.patience,
+            length_penalty=options.length_penalty,
+            max_length=max_length,
+            suppress_blank=options.suppress_blank,
+            suppress_tokens=options.suppress_tokens,
+            return_scores=True,
+            return_no_speech_prob=True,
+            sampling_temperature=options.temperatures[0],
+            repetition_penalty=options.repetition_penalty,
+            no_repeat_ngram_size=options.no_repeat_ngram_size,
+        )
+
+        output = []
+        for result in results:
+            # return scores
+            seq_len = len(result.sequences_ids[0])
+            cum_logprob = result.scores[0] * (seq_len**options.length_penalty)
+
+            output.append(
+                dict(
+                    avg_logprob=cum_logprob / (seq_len + 1),
+                    no_speech_prob=result.no_speech_prob,
+                    tokens=result.sequences_ids[0],
+                )
+            )
+
+        return encoder_output, output
+
+    def transcribe(
+        self,
+        audio: Union[str, BinaryIO, np.ndarray],
+        language: Optional[str] = None,
+        task: str = "transcribe",
+        log_progress: bool = False,
+        beam_size: int = 5,
+        best_of: int = 5,
+        patience: float = 1,
+        length_penalty: float = 1,
+        repetition_penalty: float = 1,
+        no_repeat_ngram_size: int = 0,
+        temperature: Union[float, List[float], Tuple[float, ...]] = [
+            0.0,
+            0.2,
+            0.4,
+            0.6,
+            0.8,
+            1.0,
+        ],
+        compression_ratio_threshold: Optional[float] = 2.4,
+        log_prob_threshold: Optional[float] = -1.0,
+        no_speech_threshold: Optional[float] = 0.6,
+        condition_on_previous_text: bool = True,
+        prompt_reset_on_temperature: float = 0.5,
+        initial_prompt: Optional[Union[str, Iterable[int]]] = None,
+        prefix: Optional[str] = None,
+        suppress_blank: bool = True,
+        suppress_tokens: Optional[List[int]] = [-1],
+        without_timestamps: bool = True,
+        max_initial_timestamp: float = 1.0,
+        word_timestamps: bool = False,
+        prepend_punctuations: str = "\"'“¿([{-",
+        append_punctuations: str = "\"'.。,,!!??::”)]}、",
+        multilingual: bool = False,
+        vad_filter: bool = True,
+        vad_parameters: Optional[Union[dict, VadOptions]] = None,
+        max_new_tokens: Optional[int] = None,
+        chunk_length: Optional[int] = None,
+        clip_timestamps: Optional[List[dict]] = None,
+        hallucination_silence_threshold: Optional[float] = None,
+        batch_size: int = 8,
+        hotwords: Optional[str] = None,
+        language_detection_threshold: Optional[float] = 0.5,
+        language_detection_segments: int = 1,
+    ) -> Tuple[Iterable[Segment], TranscriptionInfo]:
+        """transcribe audio in chunks in batched fashion and return with language info.
+
+        Arguments:
+            audio: Path to the input file (or a file-like object), or the audio waveform.
+            language: The language spoken in the audio. It should be a language code such
+                as "en" or "fr". If not set, the language will be detected in the first 30 seconds
+                of audio.
+            task: Task to execute (transcribe or translate).
+            log_progress: Whether to show a progress bar.
+            beam_size: Beam size to use for decoding.
+            best_of: Number of candidates when sampling with non-zero temperature.
+            patience: Beam search patience factor.
+            length_penalty: Exponential length penalty constant.
+            repetition_penalty: Penalty applied to the score of previously generated tokens
+                (set > 1 to penalize).
+            no_repeat_ngram_size: Prevent repetitions of ngrams with this size (set 0 to disable).
+            temperature: Temperature for sampling. If a list or tuple is passed,
+                only the first value is used.
+            initial_prompt: Optional text string or iterable of token ids to provide as a
+                prompt for each window.
+            suppress_blank: Suppress blank outputs at the beginning of the sampling.
+            suppress_tokens: List of token IDs to suppress. -1 will suppress a default set
+                of symbols as defined in `tokenizer.non_speech_tokens()`.
+            without_timestamps: Only sample text tokens.
+            word_timestamps: Extract word-level timestamps using the cross-attention pattern
+                and dynamic time warping, and include the timestamps for each word in each segment.
+                Defaults to False.
+            prepend_punctuations: If word_timestamps is True, merge these punctuation symbols
+                with the next word
+            append_punctuations: If word_timestamps is True, merge these punctuation symbols
+                with the previous word
+            multilingual: Perform language detection on every segment.
+            vad_filter: Enable the voice activity detection (VAD) to filter out parts of the audio
+                without speech. This step is using the Silero VAD model
+                https://github.com/snakers4/silero-vad.
+            vad_parameters: Dictionary of Silero VAD parameters or VadOptions class (see available
+                parameters and default values in the class `VadOptions`).
+            max_new_tokens: Maximum number of new tokens to generate per-chunk. If not set,
+                the maximum will be set by the default max_length.
+            chunk_length: The length of audio segments. If it is not None, it will overwrite the
+                default chunk_length of the FeatureExtractor.
+            clip_timestamps: Optionally provide list of dictionaries each containing "start" and
+                "end" keys that specify the start and end of the voiced region within
+                `chunk_length` boundary. vad_filter will be ignored if clip_timestamps is used.
+            batch_size: The maximum number of audio chunks decoded in parallel.
+            hotwords:
+                Hotwords/hint phrases to the model. Has no effect if prefix is not None.
+            language_detection_threshold: If the maximum probability of the language tokens is
+                higher than this value, the language is detected.
+            language_detection_segments: Number of segments to consider for the language detection.
+
+        Unused Arguments:
+            compression_ratio_threshold: If the gzip compression ratio is above this value,
+                treat as failed.
+            log_prob_threshold: If the average log probability over sampled tokens is
+                below this value, treat as failed.
+            no_speech_threshold: If the no_speech probability is higher than this value AND
+                the average log probability over sampled tokens is below `log_prob_threshold`,
+                consider the segment as silent.
+            condition_on_previous_text: If True, the previous output of the model is provided
+                as a prompt for the next window; disabling may make the text inconsistent across
+                windows, but the model becomes less prone to getting stuck in a failure loop,
+                such as repetition looping or timestamps going out of sync. Forced to False here.
+            prompt_reset_on_temperature: Resets prompt if temperature is above this value.
+                Arg has effect only if condition_on_previous_text is True. Forced to 0.5 here.
+            prefix: Optional text to provide as a prefix at the beginning of each window.
+            max_initial_timestamp: The initial timestamp cannot be later than this; forced to 0.0 here.
+            hallucination_silence_threshold: When word_timestamps is True, skip silent periods
+                longer than this threshold (in seconds) when a possible hallucination is detected.
+                Forced to None here.
+        Returns:
+          A tuple with:
+
+            - a generator over transcribed segments
+            - an instance of TranscriptionInfo
+        """
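+        # Illustrative usage sketch (comment only, not executed). The model name and
+        # the pipeline class name below are assumptions made for the example:
+        #
+        #     model = WhisperModel("large-v3", device="auto")
+        #     pipeline = BatchedInferencePipeline(model)
+        #     segments, info = pipeline.transcribe("audio.wav", batch_size=16)
+        #     for segment in segments:
+        #         print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")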
+
+        sampling_rate = self.model.feature_extractor.sampling_rate
+
+        if multilingual and not self.model.model.is_multilingual:
+            self.model.logger.warning(
+                "The current model is English-only but the multilingual parameter is set to "
+                "True; setting to False instead."
+            )
+            multilingual = False
+
+        if not isinstance(audio, np.ndarray):
+            audio = decode_audio(audio, sampling_rate=sampling_rate)
+        duration = audio.shape[0] / sampling_rate
+
+        self.model.logger.info(
+            "Processing audio with duration %s", format_timestamp(duration)
+        )
+
+        chunk_length = chunk_length or self.model.feature_extractor.chunk_length
+        # if no segment split is provided, use vad_model and generate segments
+        if not clip_timestamps:
+            if vad_filter:
+                if vad_parameters is None:
+                    vad_parameters = VadOptions(
+                        max_speech_duration_s=chunk_length,
+                        min_silence_duration_ms=160,
+                    )
+                elif isinstance(vad_parameters, dict):
+                    if "max_speech_duration_s" in vad_parameters.keys():
+                        vad_parameters.pop("max_speech_duration_s")
+
+                    vad_parameters = VadOptions(
+                        **vad_parameters, max_speech_duration_s=chunk_length
+                    )
+
+                active_segments = get_speech_timestamps(audio, vad_parameters)
+                clip_timestamps = merge_segments(active_segments, vad_parameters)
+            # process the audio as a single chunk if it is shorter than chunk_length, even without VAD
+            elif duration < chunk_length:
+                clip_timestamps = [{"start": 0, "end": audio.shape[0]}]
+            else:
+                raise RuntimeError(
+                    "No clip timestamps found. "
+                    "Set 'vad_filter' to True or provide 'clip_timestamps'."
+                )
+
+        duration_after_vad = (
+            sum((segment["end"] - segment["start"]) for segment in clip_timestamps)
+            / sampling_rate
+        )
+
+        self.model.logger.info(
+            "VAD filter removed %s of audio",
+            format_timestamp(duration - duration_after_vad),
+        )
+
+        audio_chunks, chunks_metadata = collect_chunks(audio, clip_timestamps)
+        features = (
+            [self.model.feature_extractor(chunk)[..., :-1] for chunk in audio_chunks]
+            if duration_after_vad
+            else []
+        )
+
+        all_language_probs = None
+        # detecting the language if not provided
+        if language is None:
+            if not self.model.model.is_multilingual:
+                language = "en"
+                language_probability = 1
+            else:
+                (
+                    language,
+                    language_probability,
+                    all_language_probs,
+                ) = self.model.detect_language(
+                    features=np.concatenate(
+                        features
+                        + [
+                            np.full((self.model.model.n_mels, 1), -1.5, dtype="float32")
+                        ],
+                        axis=1,
+                    ),  # add a dummy feature to account for empty audio
+                    language_detection_segments=language_detection_segments,
+                    language_detection_threshold=language_detection_threshold,
+                )
+
+                self.model.logger.info(
+                    "Detected language '%s' with probability %.2f",
+                    language,
+                    language_probability,
+                )
+        else:
+            if not self.model.model.is_multilingual and language != "en":
+                self.model.logger.warning(
+                    "The current model is English-only but the language parameter is set to '%s'; "
+                    "using 'en' instead." % language
+                )
+                language = "en"
+
+            language_probability = 1
+
+        tokenizer = Tokenizer(
+            self.model.hf_tokenizer,
+            self.model.model.is_multilingual,
+            task=task,
+            language=language,
+        )
+
+        features = (
+            np.stack([pad_or_trim(feature) for feature in features]) if features else []
+        )
+
+        options = TranscriptionOptions(
+            beam_size=beam_size,
+            best_of=best_of,
+            patience=patience,
+            length_penalty=length_penalty,
+            repetition_penalty=repetition_penalty,
+            no_repeat_ngram_size=no_repeat_ngram_size,
+            log_prob_threshold=log_prob_threshold,
+            no_speech_threshold=no_speech_threshold,
+            compression_ratio_threshold=compression_ratio_threshold,
+            temperatures=(
+                temperature[:1]
+                if isinstance(temperature, (list, tuple))
+                else [temperature]
+            ),
+            initial_prompt=initial_prompt,
+            prefix=prefix,
+            suppress_blank=suppress_blank,
+            suppress_tokens=(
+                get_suppressed_tokens(tokenizer, suppress_tokens)
+                if suppress_tokens
+                else suppress_tokens
+            ),
+            prepend_punctuations=prepend_punctuations,
+            append_punctuations=append_punctuations,
+            max_new_tokens=max_new_tokens,
+            hotwords=hotwords,
+            word_timestamps=word_timestamps,
+            hallucination_silence_threshold=None,
+            condition_on_previous_text=False,
+            clip_timestamps=clip_timestamps,
+            prompt_reset_on_temperature=0.5,
+            multilingual=multilingual,
+            without_timestamps=without_timestamps,
+            max_initial_timestamp=0.0,
+        )
+
+        info = TranscriptionInfo(
+            language=language,
+            language_probability=language_probability,
+            duration=duration,
+            duration_after_vad=duration_after_vad,
+            transcription_options=options,
+            vad_options=vad_parameters,
+            all_language_probs=all_language_probs,
+        )
+
+        segments = self._batched_segments_generator(
+            features,
+            tokenizer,
+            chunks_metadata,
+            batch_size,
+            options,
+            log_progress,
+        )
+
+        return segments, info
+
+    def _batched_segments_generator(
+        self, features, tokenizer, chunks_metadata, batch_size, options, log_progress
+    ):
+        pbar = tqdm(total=len(features), disable=not log_progress, position=0)
+        seg_idx = 0
+        for i in range(0, len(features), batch_size):
+            results = self.forward(
+                features[i : i + batch_size],
+                tokenizer,
+                chunks_metadata[i : i + batch_size],
+                options,
+            )
+
+            for result in results:
+                for segment in result:
+                    seg_idx += 1
+                    yield Segment(
+                        seek=segment["seek"],
+                        id=seg_idx,
+                        text=segment["text"],
+                        start=round(segment["start"], 3),
+                        end=round(segment["end"], 3),
+                        words=(
+                            None
+                            if not options.word_timestamps
+                            else [Word(**word) for word in segment["words"]]
+                        ),
+                        tokens=segment["tokens"],
+                        avg_logprob=segment["avg_logprob"],
+                        no_speech_prob=segment["no_speech_prob"],
+                        compression_ratio=segment["compression_ratio"],
+                        temperature=options.temperatures[0],
+                    )
+
+                pbar.update(1)
+
+        pbar.close()
+        self.last_speech_timestamp = 0.0
+
+
+class WhisperModel:
+    def __init__(
+        self,
+        model_size_or_path: str,
+        device: str = "auto",
+        device_index: Union[int, List[int]] = 0,
+        compute_type: str = "default",
+        cpu_threads: int = 0,
+        num_workers: int = 1,
+        download_root: Optional[str] = None,
+        local_files_only: bool = False,
+        files: dict = None,
+        revision: Optional[str] = None,
+        **model_kwargs,
+    ):
+        """Initializes the Whisper model.
+
+        Args:
+          model_size_or_path: Size of the model to use (tiny, tiny.en, base, base.en,
+            small, small.en, distil-small.en, medium, medium.en, distil-medium.en, large-v1,
+            large-v2, large-v3, large, distil-large-v2, distil-large-v3, large-v3-turbo, or turbo),
+            a path to a converted model directory, or a CTranslate2-converted Whisper model ID from
+            the HF Hub. When a size or a model ID is configured, the converted model is downloaded
+            from the Hugging Face Hub.
+          device: Device to use for computation ("cpu", "cuda", "auto").
+          device_index: Device ID to use.
+            The model can also be loaded on multiple GPUs by passing a list of IDs
+            (e.g. [0, 1, 2, 3]). In that case, multiple transcriptions can run in parallel
+            when transcribe() is called from multiple Python threads (see also num_workers).
+          compute_type: Type to use for computation.
+            See https://opennmt.net/CTranslate2/quantization.html.
+          cpu_threads: Number of threads to use when running on CPU (4 by default).
+            A non zero value overrides the OMP_NUM_THREADS environment variable.
+          num_workers: When transcribe() is called from multiple Python threads,
+            having multiple workers enables true parallelism when running the model
+            (concurrent calls to self.model.generate() will run in parallel).
+            This can improve the global throughput at the cost of increased memory usage.
+          download_root: Directory where the models should be saved. If not set, the models
+            are saved in the standard Hugging Face cache directory.
+          local_files_only:  If True, avoid downloading the file and return the path to the
+            local cached file if it exists.
+          files: Load model files from the memory. This argument is a dictionary mapping file names
+            to file contents as file-like or bytes objects. If this is set, model_path acts as an
+            identifier for this model.
+          revision:
+            An optional Git revision id which can be a branch name, a tag, or a
+            commit hash.
+        """
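+        # Illustrative construction sketch (comment only); the model names and
+        # compute types below are example values, not requirements:
+        #
+        #     model = WhisperModel("small", device="cpu", compute_type="int8")
+        #     model = WhisperModel("large-v3", device="cuda", device_index=[0, 1])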
+        self.logger = get_logger()
+
+        tokenizer_bytes, preprocessor_bytes = None, None
+        if files:
+            model_path = model_size_or_path
+            tokenizer_bytes = files.pop("tokenizer.json", None)
+            preprocessor_bytes = files.pop("preprocessor_config.json", None)
+        elif os.path.isdir(model_size_or_path):
+            model_path = model_size_or_path
+        else:
+            model_path = download_model(
+                model_size_or_path,
+                local_files_only=local_files_only,
+                cache_dir=download_root,
+                revision=revision,
+            )
+
+        self.model = ctranslate2.models.Whisper(
+            model_path,
+            device=device,
+            device_index=device_index,
+            compute_type=compute_type,
+            intra_threads=cpu_threads,
+            inter_threads=num_workers,
+            files=files,
+            **model_kwargs,
+        )
+
+        tokenizer_file = os.path.join(model_path, "tokenizer.json")
+        if tokenizer_bytes:
+            self.hf_tokenizer = tokenizers.Tokenizer.from_buffer(tokenizer_bytes)
+        elif os.path.isfile(tokenizer_file):
+            self.hf_tokenizer = tokenizers.Tokenizer.from_file(tokenizer_file)
+        else:
+            self.hf_tokenizer = tokenizers.Tokenizer.from_pretrained(
+                "openai/whisper-tiny" + ("" if self.model.is_multilingual else ".en")
+            )
+        self.feat_kwargs = self._get_feature_kwargs(model_path, preprocessor_bytes)
+        self.feature_extractor = FeatureExtractor(**self.feat_kwargs)
+        self.input_stride = 2
+        self.num_samples_per_token = (
+            self.feature_extractor.hop_length * self.input_stride
+        )
+        self.frames_per_second = (
+            self.feature_extractor.sampling_rate // self.feature_extractor.hop_length
+        )
+        self.tokens_per_second = (
+            self.feature_extractor.sampling_rate // self.num_samples_per_token
+        )
+        self.time_precision = 0.02
+        self.max_length = 448
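+        # 448 is the maximum text-token context length of the Whisper decoder (n_text_ctx).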
+
+    @property
+    def supported_languages(self) -> List[str]:
+        """The languages supported by the model."""
+        return list(_LANGUAGE_CODES) if self.model.is_multilingual else ["en"]
+
+    def _get_feature_kwargs(self, model_path, preprocessor_bytes=None) -> dict:
+        config = {}
+        try:
+            config_path = os.path.join(model_path, "preprocessor_config.json")
+            if preprocessor_bytes:
+                config = json.loads(preprocessor_bytes)
+            elif os.path.isfile(config_path):
+                with open(config_path, "r", encoding="utf-8") as file:
+                    config = json.load(file)
+            else:
+                return config
+            valid_keys = signature(FeatureExtractor.__init__).parameters.keys()
+            return {k: v for k, v in config.items() if k in valid_keys}
+        except json.JSONDecodeError as e:
+            self.logger.warning("Could not load preprocessor config: %s", e)
+
+        return config
+
+    def transcribe(
+        self,
+        audio: Union[str, BinaryIO, np.ndarray],
+        language: Optional[str] = None,
+        task: str = "transcribe",
+        log_progress: bool = False,
+        beam_size: int = 5,
+        best_of: int = 5,
+        patience: float = 1,
+        length_penalty: float = 1,
+        repetition_penalty: float = 1,
+        no_repeat_ngram_size: int = 0,
+        temperature: Union[float, List[float], Tuple[float, ...]] = [
+            0.0,
+            0.2,
+            0.4,
+            0.6,
+            0.8,
+            1.0,
+        ],
+        compression_ratio_threshold: Optional[float] = 2.4,
+        log_prob_threshold: Optional[float] = -1.0,
+        no_speech_threshold: Optional[float] = 0.6,
+        condition_on_previous_text: bool = True,
+        prompt_reset_on_temperature: float = 0.5,
+        initial_prompt: Optional[Union[str, Iterable[int]]] = None,
+        prefix: Optional[str] = None,
+        suppress_blank: bool = True,
+        suppress_tokens: Optional[List[int]] = [-1],
+        without_timestamps: bool = False,
+        max_initial_timestamp: float = 1.0,
+        word_timestamps: bool = False,
+        prepend_punctuations: str = "\"'“¿([{-",
+        append_punctuations: str = "\"'.。,,!!??::”)]}、",
+        multilingual: bool = False,
+        vad_filter: bool = False,
+        vad_parameters: Optional[Union[dict, VadOptions]] = None,
+        max_new_tokens: Optional[int] = None,
+        chunk_length: Optional[int] = None,
+        clip_timestamps: Union[str, List[float]] = "0",
+        hallucination_silence_threshold: Optional[float] = None,
+        hotwords: Optional[str] = None,
+        language_detection_threshold: Optional[float] = 0.5,
+        language_detection_segments: int = 1,
+    ) -> Tuple[Iterable[Segment], TranscriptionInfo]:
+        """Transcribes an input file.
+
+        Arguments:
+          audio: Path to the input file (or a file-like object), or the audio waveform.
+          language: The language spoken in the audio. It should be a language code such
+            as "en" or "fr". If not set, the language will be detected in the first 30 seconds
+            of audio.
+          task: Task to execute (transcribe or translate).
+          log_progress: Whether to show a progress bar.
+          beam_size: Beam size to use for decoding.
+          best_of: Number of candidates when sampling with non-zero temperature.
+          patience: Beam search patience factor.
+          length_penalty: Exponential length penalty constant.
+          repetition_penalty: Penalty applied to the score of previously generated tokens
+            (set > 1 to penalize).
+          no_repeat_ngram_size: Prevent repetitions of ngrams with this size (set 0 to disable).
+          temperature: Temperature for sampling. It can be a tuple of temperatures,
+            which will be successively used upon failures according to either
+            `compression_ratio_threshold` or `log_prob_threshold`.
+          compression_ratio_threshold: If the gzip compression ratio is above this value,
+            treat as failed.
+          log_prob_threshold: If the average log probability over sampled tokens is
+            below this value, treat as failed.
+          no_speech_threshold: If the no_speech probability is higher than this value AND
+            the average log probability over sampled tokens is below `log_prob_threshold`,
+            consider the segment as silent.
+          condition_on_previous_text: If True, the previous output of the model is provided
+            as a prompt for the next window; disabling may make the text inconsistent across
+            windows, but the model becomes less prone to getting stuck in a failure loop,
+            such as repetition looping or timestamps going out of sync.
+          prompt_reset_on_temperature: Resets prompt if temperature is above this value.
+            Arg has effect only if condition_on_previous_text is True.
+          initial_prompt: Optional text string or iterable of token ids to provide as a
+            prompt for the first window.
+          prefix: Optional text to provide as a prefix for the first window.
+          suppress_blank: Suppress blank outputs at the beginning of the sampling.
+          suppress_tokens: List of token IDs to suppress. -1 will suppress a default set
+            of symbols as defined in `tokenizer.non_speech_tokens()`.
+          without_timestamps: Only sample text tokens.
+          max_initial_timestamp: The initial timestamp cannot be later than this.
+          word_timestamps: Extract word-level timestamps using the cross-attention pattern
+            and dynamic time warping, and include the timestamps for each word in each segment.
+          prepend_punctuations: If word_timestamps is True, merge these punctuation symbols
+            with the next word
+          append_punctuations: If word_timestamps is True, merge these punctuation symbols
+            with the previous word
+          multilingual: Perform language detection on every segment.
+          vad_filter: Enable the voice activity detection (VAD) to filter out parts of the audio
+            without speech. This step is using the Silero VAD model
+            https://github.com/snakers4/silero-vad.
+          vad_parameters: Dictionary of Silero VAD parameters or VadOptions class (see available
+            parameters and default values in the class `VadOptions`).
+          max_new_tokens: Maximum number of new tokens to generate per-chunk. If not set,
+            the maximum will be set by the default max_length.
+          chunk_length: The length of audio segments. If it is not None, it will overwrite the
+            default chunk_length of the FeatureExtractor.
+          clip_timestamps:
+            Comma-separated list start,end,start,end,... timestamps (in seconds) of clips to
+            process. The last end timestamp defaults to the end of the file.
+            vad_filter will be ignored if clip_timestamps is used.
+          hallucination_silence_threshold:
+            When word_timestamps is True, skip silent periods longer than this threshold
+            (in seconds) when a possible hallucination is detected
+          hotwords:
+            Hotwords/hint phrases to provide the model with. Has no effect if prefix is not None.
+          language_detection_threshold: If the maximum probability of the language tokens is higher
+            than this value, the language is detected.
+          language_detection_segments: Number of segments to consider for the language detection.
+        Returns:
+          A tuple with:
+
+            - a generator over transcribed segments
+            - an instance of TranscriptionInfo
+        """
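+        # Illustrative usage sketch (comment only); the file name is an example.
+        # Note that segments are yielded lazily by a generator:
+        #
+        #     segments, info = model.transcribe("audio.mp3", vad_filter=True)
+        #     print(info.language, info.language_probability)
+        #     for segment in segments:
+        #         print(segment.start, segment.end, segment.text)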
+        sampling_rate = self.feature_extractor.sampling_rate
+
+        if multilingual and not self.model.is_multilingual:
+            self.logger.warning(
+                "The current model is English-only but the multilingual parameter is set to "
+                "True; setting to False instead."
+            )
+            multilingual = False
+
+        if not isinstance(audio, np.ndarray):
+            audio = decode_audio(audio, sampling_rate=sampling_rate)
+
+        duration = audio.shape[0] / sampling_rate
+        duration_after_vad = duration
+
+        self.logger.info(
+            "Processing audio with duration %s", format_timestamp(duration)
+        )
+
+        if vad_filter and clip_timestamps == "0":
+            if vad_parameters is None:
+                vad_parameters = VadOptions()
+            elif isinstance(vad_parameters, dict):
+                vad_parameters = VadOptions(**vad_parameters)
+            speech_chunks = get_speech_timestamps(audio, vad_parameters)
+            audio_chunks, chunks_metadata = collect_chunks(audio, speech_chunks)
+            audio = np.concatenate(audio_chunks, axis=0)
+            duration_after_vad = audio.shape[0] / sampling_rate
+
+            self.logger.info(
+                "VAD filter removed %s of audio",
+                format_timestamp(duration - duration_after_vad),
+            )
+
+            if self.logger.isEnabledFor(logging.DEBUG):
+                self.logger.debug(
+                    "VAD filter kept the following audio segments: %s",
+                    ", ".join(
+                        "[%s -> %s]"
+                        % (
+                            format_timestamp(chunk["start"] / sampling_rate),
+                            format_timestamp(chunk["end"] / sampling_rate),
+                        )
+                        for chunk in speech_chunks
+                    ),
+                )
+
+        else:
+            speech_chunks = None
+
+        features = self.feature_extractor(audio, chunk_length=chunk_length)
+
+        encoder_output = None
+        all_language_probs = None
+
+        # detecting the language if not provided
+        if language is None:
+            if not self.model.is_multilingual:
+                language = "en"
+                language_probability = 1
+            else:
+                start_timestamp = (
+                    float(clip_timestamps.split(",")[0])
+                    if isinstance(clip_timestamps, str)
+                    else clip_timestamps[0]
+                )
+                content_frames = features.shape[-1] - 1
+                seek = (
+                    int(start_timestamp * self.frames_per_second)
+                    if start_timestamp * self.frames_per_second < content_frames
+                    else 0
+                )
+                (
+                    language,
+                    language_probability,
+                    all_language_probs,
+                ) = self.detect_language(
+                    features=features[..., seek:],
+                    language_detection_segments=language_detection_segments,
+                    language_detection_threshold=language_detection_threshold,
+                )
+
+                self.logger.info(
+                    "Detected language '%s' with probability %.2f",
+                    language,
+                    language_probability,
+                )
+        else:
+            if not self.model.is_multilingual and language != "en":
+                self.logger.warning(
+                    "The current model is English-only but the language parameter is set to '%s'; "
+                    "using 'en' instead." % language
+                )
+                language = "en"
+
+            language_probability = 1
+
+        tokenizer = Tokenizer(
+            self.hf_tokenizer,
+            self.model.is_multilingual,
+            task=task,
+            language=language,
+        )
+
+        options = TranscriptionOptions(
+            beam_size=beam_size,
+            best_of=best_of,
+            patience=patience,
+            length_penalty=length_penalty,
+            repetition_penalty=repetition_penalty,
+            no_repeat_ngram_size=no_repeat_ngram_size,
+            log_prob_threshold=log_prob_threshold,
+            no_speech_threshold=no_speech_threshold,
+            compression_ratio_threshold=compression_ratio_threshold,
+            condition_on_previous_text=condition_on_previous_text,
+            prompt_reset_on_temperature=prompt_reset_on_temperature,
+            temperatures=(
+                temperature if isinstance(temperature, (list, tuple)) else [temperature]
+            ),
+            initial_prompt=initial_prompt,
+            prefix=prefix,
+            suppress_blank=suppress_blank,
+            suppress_tokens=(
+                get_suppressed_tokens(tokenizer, suppress_tokens)
+                if suppress_tokens
+                else suppress_tokens
+            ),
+            without_timestamps=without_timestamps,
+            max_initial_timestamp=max_initial_timestamp,
+            word_timestamps=word_timestamps,
+            prepend_punctuations=prepend_punctuations,
+            append_punctuations=append_punctuations,
+            multilingual=multilingual,
+            max_new_tokens=max_new_tokens,
+            clip_timestamps=clip_timestamps,
+            hallucination_silence_threshold=hallucination_silence_threshold,
+            hotwords=hotwords,
+        )
+
+        segments = self.generate_segments(
+            features, tokenizer, options, log_progress, encoder_output
+        )
+
+        if speech_chunks:
+            segments = restore_speech_timestamps(segments, speech_chunks, sampling_rate)
+
+        info = TranscriptionInfo(
+            language=language,
+            language_probability=language_probability,
+            duration=duration,
+            duration_after_vad=duration_after_vad,
+            transcription_options=options,
+            vad_options=vad_parameters,
+            all_language_probs=all_language_probs,
+        )
+
+        return segments, info
+
+    def _split_segments_by_timestamps(
+        self,
+        tokenizer: Tokenizer,
+        tokens: List[int],
+        time_offset: float,
+        segment_size: int,
+        segment_duration: float,
+        seek: int,
+    ) -> Tuple[List[dict], int, bool]:
+        current_segments = []
+        single_timestamp_ending = (
+            len(tokens) >= 2 and tokens[-2] < tokenizer.timestamp_begin <= tokens[-1]
+        )
+
+        consecutive_timestamps = [
+            i
+            for i in range(len(tokens))
+            if i > 0
+            and tokens[i] >= tokenizer.timestamp_begin
+            and tokens[i - 1] >= tokenizer.timestamp_begin
+        ]
+
+        if len(consecutive_timestamps) > 0:
+            slices = list(consecutive_timestamps)
+            if single_timestamp_ending:
+                slices.append(len(tokens))
+
+            last_slice = 0
+            for current_slice in slices:
+                sliced_tokens = tokens[last_slice:current_slice]
+                start_timestamp_position = sliced_tokens[0] - tokenizer.timestamp_begin
+                end_timestamp_position = sliced_tokens[-1] - tokenizer.timestamp_begin
+                start_time = (
+                    time_offset + start_timestamp_position * self.time_precision
+                )
+                end_time = time_offset + end_timestamp_position * self.time_precision
+
+                current_segments.append(
+                    dict(
+                        seek=seek,
+                        start=start_time,
+                        end=end_time,
+                        tokens=sliced_tokens,
+                    )
+                )
+                last_slice = current_slice
+
+            if single_timestamp_ending:
+                # single timestamp at the end means no speech after the last timestamp.
+                seek += segment_size
+            else:
+                # otherwise, ignore the unfinished segment and seek to the last timestamp
+                last_timestamp_position = (
+                    tokens[last_slice - 1] - tokenizer.timestamp_begin
+                )
+                seek += last_timestamp_position * self.input_stride
+
+        else:
+            duration = segment_duration
+            timestamps = [
+                token for token in tokens if token >= tokenizer.timestamp_begin
+            ]
+            if len(timestamps) > 0 and timestamps[-1] != tokenizer.timestamp_begin:
+                last_timestamp_position = timestamps[-1] - tokenizer.timestamp_begin
+                duration = last_timestamp_position * self.time_precision
+
+            current_segments.append(
+                dict(
+                    seek=seek,
+                    start=time_offset,
+                    end=time_offset + duration,
+                    tokens=tokens,
+                )
+            )
+
+            seek += segment_size
+
+        return current_segments, seek, single_timestamp_ending
+
+    def generate_segments(
+        self,
+        features: np.ndarray,
+        tokenizer: Tokenizer,
+        options: TranscriptionOptions,
+        log_progress,
+        encoder_output: Optional[ctranslate2.StorageView] = None,
+    ) -> Iterable[Segment]:
+        content_frames = features.shape[-1] - 1
+        content_duration = float(content_frames * self.feature_extractor.time_per_frame)
+
+        if isinstance(options.clip_timestamps, str):
+            options.clip_timestamps = [
+                float(ts)
+                for ts in (
+                    options.clip_timestamps.split(",")
+                    if options.clip_timestamps
+                    else []
+                )
+            ]
+
+        seek_points: List[int] = [
+            round(ts * self.frames_per_second) for ts in options.clip_timestamps
+        ]
+        if len(seek_points) == 0:
+            seek_points.append(0)
+        if len(seek_points) % 2 == 1:
+            seek_points.append(content_frames)
+        seek_clips: List[Tuple[int, int]] = list(
+            zip(seek_points[::2], seek_points[1::2])
+        )
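+        # Worked example (assuming the default 100 frames per second): the default
+        # clip_timestamps "0" becomes seek_points [0, content_frames] and a single
+        # clip (0, content_frames); "2.0,10.5" becomes [(200, 1050)].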
+
+        punctuation = "\"'“¿([{-\"'.。,,!!??::”)]}、"
+
+        idx = 0
+        clip_idx = 0
+        seek = seek_clips[clip_idx][0]
+        all_tokens = []
+        prompt_reset_since = 0
+
+        if options.initial_prompt is not None:
+            if isinstance(options.initial_prompt, str):
+                initial_prompt = " " + options.initial_prompt.strip()
+                initial_prompt_tokens = tokenizer.encode(initial_prompt)
+                all_tokens.extend(initial_prompt_tokens)
+            else:
+                all_tokens.extend(options.initial_prompt)
+
+        pbar = tqdm(total=content_duration, unit="seconds", disable=not log_progress)
+        last_speech_timestamp = 0.0
+        # NOTE: This loop is obscurely flattened to make the diff readable.
+        # A later commit should turn this into a simpler nested loop.
+        # for seek_clip_start, seek_clip_end in seek_clips:
+        #     while seek < seek_clip_end
+        while clip_idx < len(seek_clips):
+            seek_clip_start, seek_clip_end = seek_clips[clip_idx]
+            if seek_clip_end > content_frames:
+                seek_clip_end = content_frames
+            if seek < seek_clip_start:
+                seek = seek_clip_start
+            if seek >= seek_clip_end:
+                clip_idx += 1
+                if clip_idx < len(seek_clips):
+                    seek = seek_clips[clip_idx][0]
+                continue
+            time_offset = seek * self.feature_extractor.time_per_frame
+            window_end_time = float(
+                (seek + self.feature_extractor.nb_max_frames)
+                * self.feature_extractor.time_per_frame
+            )
+            segment_size = min(
+                self.feature_extractor.nb_max_frames,
+                content_frames - seek,
+                seek_clip_end - seek,
+            )
+            segment = features[:, seek : seek + segment_size]
+            segment_duration = segment_size * self.feature_extractor.time_per_frame
+            segment = pad_or_trim(segment)
+
+            if self.logger.isEnabledFor(logging.DEBUG):
+                self.logger.debug(
+                    "Processing segment at %s", format_timestamp(time_offset)
+                )
+
+            previous_tokens = all_tokens[prompt_reset_since:]
+
+            if seek > 0 or encoder_output is None:
+                encoder_output = self.encode(segment)
+
+            if options.multilingual:
+                results = self.model.detect_language(encoder_output)
+                language_token, language_probability = results[0][0]
+                language = language_token[2:-2]
+
+                tokenizer.language = tokenizer.tokenizer.token_to_id(language_token)
+                tokenizer.language_code = language
+
+            prompt = self.get_prompt(
+                tokenizer,
+                previous_tokens,
+                without_timestamps=options.without_timestamps,
+                prefix=options.prefix if seek == 0 else None,
+                hotwords=options.hotwords,
+            )
+
+            (
+                result,
+                avg_logprob,
+                temperature,
+                compression_ratio,
+            ) = self.generate_with_fallback(encoder_output, prompt, tokenizer, options)
+
+            if options.no_speech_threshold is not None:
+                # no-speech check: possibly skip this window if speech is unlikely
+                should_skip = result.no_speech_prob > options.no_speech_threshold
+
+                if (
+                    options.log_prob_threshold is not None
+                    and avg_logprob > options.log_prob_threshold
+                ):
+                    # don't skip if the logprob is high enough, despite the no_speech_prob
+                    should_skip = False
+
+                if should_skip:
+                    self.logger.debug(
+                        "No speech threshold is met (%f > %f)",
+                        result.no_speech_prob,
+                        options.no_speech_threshold,
+                    )
+
+                    # fast-forward to the next segment boundary
+                    seek += segment_size
+                    continue
+
+            tokens = result.sequences_ids[0]
+
+            previous_seek = seek
+
+            # anomalous words are very long/short/improbable
+            def word_anomaly_score(word: dict) -> float:
+                probability = word.get("probability", 0.0)
+                duration = word["end"] - word["start"]
+                score = 0.0
+                if probability < 0.15:
+                    score += 1.0
+                if duration < 0.133:
+                    score += (0.133 - duration) * 15
+                if duration > 2.0:
+                    score += duration - 2.0
+                return score
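+            # Worked example: a word with probability 0.10 that lasts 3.0 seconds
+            # scores 1.0 (improbable) + 1.0 (3.0 - 2.0 seconds too long) = 2.0.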
+
+            def is_segment_anomaly(segment: Optional[dict]) -> bool:
+                if segment is None or not segment["words"]:
+                    return False
+                words = [w for w in segment["words"] if w["word"] not in punctuation]
+                words = words[:8]
+                score = sum(word_anomaly_score(w) for w in words)
+                return score >= 3 or score + 0.01 >= len(words)
+
+            def next_words_segment(segments: List[dict]) -> Optional[dict]:
+                return next((s for s in segments if s["words"]), None)
+
+            (
+                current_segments,
+                seek,
+                single_timestamp_ending,
+            ) = self._split_segments_by_timestamps(
+                tokenizer=tokenizer,
+                tokens=tokens,
+                time_offset=time_offset,
+                segment_size=segment_size,
+                segment_duration=segment_duration,
+                seek=seek,
+            )
+
+            if options.word_timestamps:
+                self.add_word_timestamps(
+                    [current_segments],
+                    tokenizer,
+                    encoder_output,
+                    segment_size,
+                    options.prepend_punctuations,
+                    options.append_punctuations,
+                    last_speech_timestamp=last_speech_timestamp,
+                )
+                if not single_timestamp_ending:
+                    last_word_end = get_end(current_segments)
+                    if last_word_end is not None and last_word_end > time_offset:
+                        seek = round(last_word_end * self.frames_per_second)
+
+                # skip silence before possible hallucinations
+                if options.hallucination_silence_threshold is not None:
+                    threshold = options.hallucination_silence_threshold
+
+                    # if first segment might be a hallucination, skip leading silence
+                    first_segment = next_words_segment(current_segments)
+                    if first_segment is not None and is_segment_anomaly(first_segment):
+                        gap = first_segment["start"] - time_offset
+                        if gap > threshold:
+                            seek = previous_seek + round(gap * self.frames_per_second)
+                            continue
+
+                    # skip silence before any possible hallucination that is surrounded
+                    # by silence or more hallucinations
+                    hal_last_end = last_speech_timestamp
+                    for si in range(len(current_segments)):
+                        segment = current_segments[si]
+                        if not segment["words"]:
+                            continue
+                        if is_segment_anomaly(segment):
+                            next_segment = next_words_segment(
+                                current_segments[si + 1 :]
+                            )
+                            if next_segment is not None:
+                                hal_next_start = next_segment["words"][0]["start"]
+                            else:
+                                hal_next_start = time_offset + segment_duration
+                            silence_before = (
+                                segment["start"] - hal_last_end > threshold
+                                or segment["start"] < threshold
+                                or segment["start"] - time_offset < 2.0
+                            )
+                            silence_after = (
+                                hal_next_start - segment["end"] > threshold
+                                or is_segment_anomaly(next_segment)
+                                or window_end_time - segment["end"] < 2.0
+                            )
+                            if silence_before and silence_after:
+                                seek = round(
+                                    max(time_offset + 1, segment["start"])
+                                    * self.frames_per_second
+                                )
+                                if content_duration - segment["end"] < threshold:
+                                    seek = content_frames
+                                current_segments[si:] = []
+                                break
+                        hal_last_end = segment["end"]
+
+                last_word_end = get_end(current_segments)
+                if last_word_end is not None:
+                    last_speech_timestamp = last_word_end
+            for segment in current_segments:
+                tokens = segment["tokens"]
+                text = tokenizer.decode(tokens)
+
+                if segment["start"] == segment["end"] or not text.strip():
+                    continue
+
+                all_tokens.extend(tokens)
+                idx += 1
+
+                yield Segment(
+                    id=idx,
+                    seek=previous_seek,
+                    start=segment["start"],
+                    end=segment["end"],
+                    text=text,
+                    tokens=tokens,
+                    temperature=temperature,
+                    avg_logprob=avg_logprob,
+                    compression_ratio=compression_ratio,
+                    no_speech_prob=result.no_speech_prob,
+                    words=(
+                        [Word(**word) for word in segment["words"]]
+                        if options.word_timestamps
+                        else None
+                    ),
+                )
+
+            if (
+                not options.condition_on_previous_text
+                or temperature > options.prompt_reset_on_temperature
+            ):
+                if options.condition_on_previous_text:
+                    self.logger.debug(
+                        "Reset prompt. prompt_reset_on_temperature threshold is met %f > %f",
+                        temperature,
+                        options.prompt_reset_on_temperature,
+                    )
+
+                prompt_reset_since = len(all_tokens)
+
+            pbar.update(
+                (min(content_frames, seek) - previous_seek)
+                * self.feature_extractor.time_per_frame,
+            )
+        pbar.close()
+
+    def encode(self, features: np.ndarray) -> ctranslate2.StorageView:
+        # When the model is running on multiple GPUs, the encoder output should be moved
+        # to the CPU since we don't know which GPU will handle the next job.
+        to_cpu = self.model.device == "cuda" and len(self.model.device_index) > 1
+
+        if features.ndim == 2:
+            features = np.expand_dims(features, 0)
+        features = get_ctranslate2_storage(features)
+
+        return self.model.encode(features, to_cpu=to_cpu)
+
+    def generate_with_fallback(
+        self,
+        encoder_output: ctranslate2.StorageView,
+        prompt: List[int],
+        tokenizer: Tokenizer,
+        options: TranscriptionOptions,
+    ) -> Tuple[ctranslate2.models.WhisperGenerationResult, float, float, float]:
+        decode_result = None
+        all_results = []
+        below_cr_threshold_results = []
+
+        max_initial_timestamp_index = int(
+            round(options.max_initial_timestamp / self.time_precision)
+        )
+        if options.max_new_tokens is not None:
+            max_length = len(prompt) + options.max_new_tokens
+        else:
+            max_length = self.max_length
+
+        if max_length > self.max_length:
+            raise ValueError(
+                f"The length of the prompt is {len(prompt)}, and `max_new_tokens` "
+                f"is {max_length - len(prompt)}. Thus, the combined length of the prompt "
+                f"and `max_new_tokens` is {max_length}. This exceeds the "
+                f"`max_length` of the Whisper model: {self.max_length}. "
+                "You should either reduce the length of your prompt, or "
+                "reduce the value of `max_new_tokens`, "
+                f"so that their combined length is less than {self.max_length}."
+            )
+
+        for temperature in options.temperatures:
+            if temperature > 0:
+                kwargs = {
+                    "beam_size": 1,
+                    "num_hypotheses": options.best_of,
+                    "sampling_topk": 0,
+                    "sampling_temperature": temperature,
+                }
+            else:
+                kwargs = {
+                    "beam_size": options.beam_size,
+                    "patience": options.patience,
+                }
+
+            result = self.model.generate(
+                encoder_output,
+                [prompt],
+                length_penalty=options.length_penalty,
+                repetition_penalty=options.repetition_penalty,
+                no_repeat_ngram_size=options.no_repeat_ngram_size,
+                max_length=max_length,
+                return_scores=True,
+                return_no_speech_prob=True,
+                suppress_blank=options.suppress_blank,
+                suppress_tokens=options.suppress_tokens,
+                max_initial_timestamp_index=max_initial_timestamp_index,
+                **kwargs,
+            )[0]
+
+            tokens = result.sequences_ids[0]
+
+            # Recover the average log prob from the returned score.
+            seq_len = len(tokens)
+            cum_logprob = result.scores[0] * (seq_len**options.length_penalty)
+            avg_logprob = cum_logprob / (seq_len + 1)
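+            # Example: with length_penalty=1, 10 tokens and a returned score of -0.5,
+            # cum_logprob is -5.0 and avg_logprob is -5.0 / 11 ≈ -0.45; the +1 in the
+            # denominator presumably counts the end-of-text token.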
+
+            text = tokenizer.decode(tokens).strip()
+            compression_ratio = get_compression_ratio(text)
+
+            decode_result = (
+                result,
+                avg_logprob,
+                temperature,
+                compression_ratio,
+            )
+            all_results.append(decode_result)
+
+            needs_fallback = False
+
+            if options.compression_ratio_threshold is not None:
+                if compression_ratio > options.compression_ratio_threshold:
+                    needs_fallback = True  # too repetitive
+
+                    self.logger.debug(
+                        "Compression ratio threshold is not met with temperature %.1f (%f > %f)",
+                        temperature,
+                        compression_ratio,
+                        options.compression_ratio_threshold,
+                    )
+                else:
+                    below_cr_threshold_results.append(decode_result)
+
+            if (
+                options.log_prob_threshold is not None
+                and avg_logprob < options.log_prob_threshold
+            ):
+                needs_fallback = True  # average log probability is too low
+
+                self.logger.debug(
+                    "Log probability threshold is not met with temperature %.1f (%f < %f)",
+                    temperature,
+                    avg_logprob,
+                    options.log_prob_threshold,
+                )
+
+            if (
+                options.no_speech_threshold is not None
+                and result.no_speech_prob > options.no_speech_threshold
+                and options.log_prob_threshold is not None
+                and avg_logprob < options.log_prob_threshold
+            ):
+                needs_fallback = False  # silence
+
+            if not needs_fallback:
+                break
+        else:
+            # all failed, select the result with the highest average log probability
+            decode_result = max(
+                below_cr_threshold_results or all_results, key=lambda x: x[1]
+            )
+            # to pass final temperature for prompt_reset_on_temperature
+            decode_result = (
+                decode_result[0],
+                decode_result[1],
+                temperature,
+                decode_result[3],
+            )
+
+        return decode_result
+
+    def get_prompt(
+        self,
+        tokenizer: Tokenizer,
+        previous_tokens: List[int],
+        without_timestamps: bool = False,
+        prefix: Optional[str] = None,
+        hotwords: Optional[str] = None,
+    ) -> List[int]:
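+        # The prompt is assembled as: optional sot_prev block (hotwords and/or the
+        # tail of the previous tokens), the SOT sequence, an optional no-timestamps
+        # token, and an optional prefix (preceded by a timestamp token when
+        # timestamps are enabled).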
+        prompt = []
+
+        if previous_tokens or (hotwords and not prefix):
+            prompt.append(tokenizer.sot_prev)
+            if hotwords and not prefix:
+                hotwords_tokens = tokenizer.encode(" " + hotwords.strip())
+                if len(hotwords_tokens) >= self.max_length // 2:
+                    hotwords_tokens = hotwords_tokens[: self.max_length // 2 - 1]
+                prompt.extend(hotwords_tokens)
+            if previous_tokens:
+                prompt.extend(previous_tokens[-(self.max_length // 2 - 1) :])
+
+        prompt.extend(tokenizer.sot_sequence)
+
+        if without_timestamps:
+            prompt.append(tokenizer.no_timestamps)
+
+        if prefix:
+            prefix_tokens = tokenizer.encode(" " + prefix.strip())
+            if len(prefix_tokens) >= self.max_length // 2:
+                prefix_tokens = prefix_tokens[: self.max_length // 2 - 1]
+            if not without_timestamps:
+                prompt.append(tokenizer.timestamp_begin)
+            prompt.extend(prefix_tokens)
+
+        return prompt
+
+    def add_word_timestamps(
+        self,
+        segments: List[dict],
+        tokenizer: Tokenizer,
+        encoder_output: ctranslate2.StorageView,
+        num_frames: int,
+        prepend_punctuations: str,
+        append_punctuations: str,
+        last_speech_timestamp: float,
+    ) -> float:
+        if len(segments) == 0:
+            return
+
+        text_tokens = []
+        text_tokens_per_segment = []
+        for segment in segments:
+            segment_tokens = [
+                [token for token in subsegment["tokens"] if token < tokenizer.eot]
+                for subsegment in segment
+            ]
+            text_tokens.append(list(itertools.chain.from_iterable(segment_tokens)))
+            text_tokens_per_segment.append(segment_tokens)
+
+        alignments = self.find_alignment(
+            tokenizer, text_tokens, encoder_output, num_frames
+        )
+        median_max_durations = []
+        for alignment in alignments:
+            word_durations = np.array(
+                [word["end"] - word["start"] for word in alignment]
+            )
+            word_durations = word_durations[word_durations.nonzero()]
+            median_duration = (
+                np.median(word_durations) if len(word_durations) > 0 else 0.0
+            )
+            median_duration = min(0.7, float(median_duration))
+            max_duration = median_duration * 2
+
+            # hack: truncate long words at sentence boundaries.
+            # a better segmentation algorithm based on VAD should be able to replace this.
+            if len(word_durations) > 0:
+                sentence_end_marks = ".。!!??"
+                # ensure words at sentence boundaries
+                # are not longer than twice the median word duration.
+                for i in range(1, len(alignment)):
+                    if alignment[i]["end"] - alignment[i]["start"] > max_duration:
+                        if alignment[i]["word"] in sentence_end_marks:
+                            alignment[i]["end"] = alignment[i]["start"] + max_duration
+                        elif alignment[i - 1]["word"] in sentence_end_marks:
+                            alignment[i]["start"] = alignment[i]["end"] - max_duration
+
+            merge_punctuations(alignment, prepend_punctuations, append_punctuations)
+            median_max_durations.append((median_duration, max_duration))
+
+        for segment_idx, segment in enumerate(segments):
+            word_index = 0
+            time_offset = segment[0]["seek"] / self.frames_per_second
+            median_duration, max_duration = median_max_durations[segment_idx]
+            for subsegment_idx, subsegment in enumerate(segment):
+                saved_tokens = 0
+                words = []
+
+                while word_index < len(alignments[segment_idx]) and saved_tokens < len(
+                    text_tokens_per_segment[segment_idx][subsegment_idx]
+                ):
+                    timing = alignments[segment_idx][word_index]
+
+                    if timing["word"]:
+                        words.append(
+                            dict(
+                                word=timing["word"],
+                                start=round(time_offset + timing["start"], 2),
+                                end=round(time_offset + timing["end"], 2),
+                                probability=timing["probability"],
+                            )
+                        )
+
+                    saved_tokens += len(timing["tokens"])
+                    word_index += 1
+
+                # hack: truncate long words at segment boundaries.
+                # a better segmentation algorithm based on VAD should be able to replace this.
+                if len(words) > 0:
+                    # ensure the first and second word after a pause is not longer than
+                    # twice the median word duration.
+                    if words[0][
+                        "end"
+                    ] - last_speech_timestamp > median_duration * 4 and (
+                        words[0]["end"] - words[0]["start"] > max_duration
+                        or (
+                            len(words) > 1
+                            and words[1]["end"] - words[0]["start"] > max_duration * 2
+                        )
+                    ):
+                        if (
+                            len(words) > 1
+                            and words[1]["end"] - words[1]["start"] > max_duration
+                        ):
+                            boundary = max(
+                                words[1]["end"] / 2, words[1]["end"] - max_duration
+                            )
+                            words[0]["end"] = words[1]["start"] = boundary
+                        words[0]["start"] = max(0, words[0]["end"] - max_duration)
+
+                    # prefer the segment-level start timestamp if the first word is too long.
+                    if (
+                        subsegment["start"] < words[0]["end"]
+                        and subsegment["start"] - 0.5 > words[0]["start"]
+                    ):
+                        words[0]["start"] = max(
+                            0,
+                            min(words[0]["end"] - median_duration, subsegment["start"]),
+                        )
+                    else:
+                        subsegment["start"] = words[0]["start"]
+
+                    # prefer the segment-level end timestamp if the last word is too long.
+                    if (
+                        subsegment["end"] > words[-1]["start"]
+                        and subsegment["end"] + 0.5 < words[-1]["end"]
+                    ):
+                        words[-1]["end"] = max(
+                            words[-1]["start"] + median_duration, subsegment["end"]
+                        )
+                    else:
+                        subsegment["end"] = words[-1]["end"]
+
+                    last_speech_timestamp = subsegment["end"]
+                segments[segment_idx][subsegment_idx]["words"] = words
+        return last_speech_timestamp
+
+    def find_alignment(
+        self,
+        tokenizer: Tokenizer,
+        text_tokens: List[int],
+        encoder_output: ctranslate2.StorageView,
+        num_frames: int,
+        median_filter_width: int = 7,
+    ) -> List[dict]:
+        if len(text_tokens) == 0:
+            return []
+
+        results = self.model.align(
+            encoder_output,
+            tokenizer.sot_sequence,
+            text_tokens,
+            num_frames,
+            median_filter_width=median_filter_width,
+        )
+        return_list = []
+        for result, text_token in zip(results, text_tokens):
+            text_token_probs = result.text_token_probs
+            alignments = result.alignments
+            text_indices = np.array([pair[0] for pair in alignments])
+            time_indices = np.array([pair[1] for pair in alignments])
+
+            words, word_tokens = tokenizer.split_to_word_tokens(
+                text_token + [tokenizer.eot]
+            )
+            if len(word_tokens) <= 1:
+                # return on eot only
+                # >>> np.pad([], (1, 0))
+                # array([0.])
+                # This results in crashes when we lookup jump_times with float, like
+                # IndexError: arrays used as indices must be of integer (or boolean) type
+                return_list.append([])
+                continue
+            word_boundaries = np.pad(
+                np.cumsum([len(t) for t in word_tokens[:-1]]), (1, 0)
+            )
+            if len(word_boundaries) <= 1:
+                return_list.append([])
+                continue
+
+            jumps = np.pad(np.diff(text_indices), (1, 0), constant_values=1).astype(
+                bool
+            )
+            jump_times = time_indices[jumps] / self.tokens_per_second
+            start_times = jump_times[word_boundaries[:-1]]
+            end_times = jump_times[word_boundaries[1:]]
+            word_probabilities = [
+                np.mean(text_token_probs[i:j])
+                for i, j in zip(word_boundaries[:-1], word_boundaries[1:])
+            ]
+
+            return_list.append(
+                [
+                    dict(
+                        word=word,
+                        tokens=tokens,
+                        start=start,
+                        end=end,
+                        probability=probability,
+                    )
+                    for word, tokens, start, end, probability in zip(
+                        words, word_tokens, start_times, end_times, word_probabilities
+                    )
+                ]
+            )
+        return return_list
+
+    def detect_language(
+        self,
+        audio: Optional[np.ndarray] = None,
+        features: Optional[np.ndarray] = None,
+        vad_filter: bool = False,
+        vad_parameters: Union[dict, VadOptions] = None,
+        language_detection_segments: int = 1,
+        language_detection_threshold: float = 0.5,
+    ) -> Tuple[str, float, List[Tuple[str, float]]]:
+        """
+        Use Whisper to detect the language of the input audio or features.
+
+        Arguments:
+            audio: Input audio signal, must be a 1D float array sampled at 16 kHz.
+            features: Input Mel spectrogram features, must be a float array with
+                shape (n_mels, n_frames). If `audio` is provided, the features are ignored.
+                Either `audio` or `features` must be provided.
+            vad_filter: Enable the voice activity detection (VAD) to filter out parts of the audio
+                without speech. This step is using the Silero VAD model.
+            vad_parameters: Dictionary of Silero VAD parameters or VadOptions class (see available
+                parameters and default values in the class `VadOptions`).
+            language_detection_threshold: If the maximum probability of the language tokens is
+                higher than this value, the language is detected.
+            language_detection_segments: Number of segments to consider for the language detection.
+
+        Returns:
+            language: Detected language.
+            language_probability: Probability of the detected language.
+            all_language_probs: List of tuples with all language names and probabilities.
+        """
+        assert (
+            audio is not None or features is not None
+        ), "Either `audio` or `features` must be provided."
+
+        if audio is not None:
+            if vad_filter:
+                speech_chunks = get_speech_timestamps(audio, vad_parameters)
+                audio_chunks, chunks_metadata = collect_chunks(audio, speech_chunks)
+                audio = np.concatenate(audio_chunks, axis=0)
+
+            audio = audio[
+                : language_detection_segments * self.feature_extractor.n_samples
+            ]
+            features = self.feature_extractor(audio)
+
+        features = features[
+            ..., : language_detection_segments * self.feature_extractor.nb_max_frames
+        ]
+
+        detected_language_info = {}
+        for i in range(0, features.shape[-1], self.feature_extractor.nb_max_frames):
+            encoder_output = self.encode(
+                pad_or_trim(features[..., i : i + self.feature_extractor.nb_max_frames])
+            )
+            # results is a list of tuple[str, float] with language names and probabilities.
+            results = self.model.detect_language(encoder_output)[0]
+
+            # Parse language names to strip out markers
+            all_language_probs = [(token[2:-2], prob) for (token, prob) in results]
+            # Get top language token and probability
+            language, language_probability = all_language_probs[0]
+            if language_probability > language_detection_threshold:
+                break
+            detected_language_info.setdefault(language, []).append(language_probability)
+        else:
+            # If no segment reaches the detection threshold, fall back to a majority vote:
+            # pick the language that was ranked highest in the most segments.
+            language = max(
+                detected_language_info,
+                key=lambda lang: len(detected_language_info[lang]),
+            )
+            language_probability = max(detected_language_info[language])
+
+        return language, language_probability, all_language_probs
+
+
+def restore_speech_timestamps(
+    segments: Iterable[Segment],
+    speech_chunks: List[dict],
+    sampling_rate: int,
+) -> Iterable[Segment]:
+    ts_map = SpeechTimestampsMap(speech_chunks, sampling_rate)
+
+    for segment in segments:
+        if segment.words:
+            words = []
+            for word in segment.words:
+                # Ensure the word start and end times are resolved to the same chunk.
+                middle = (word.start + word.end) / 2
+                chunk_index = ts_map.get_chunk_index(middle)
+                word.start = ts_map.get_original_time(word.start, chunk_index)
+                word.end = ts_map.get_original_time(word.end, chunk_index)
+                words.append(word)
+
+            segment.start = words[0].start
+            segment.end = words[-1].end
+            segment.words = words
+
+        else:
+            segment.start = ts_map.get_original_time(segment.start)
+            segment.end = ts_map.get_original_time(segment.end)
+
+        yield segment
+
+
+def get_ctranslate2_storage(segment: np.ndarray) -> ctranslate2.StorageView:
+    segment = np.ascontiguousarray(segment)
+    segment = ctranslate2.StorageView.from_array(segment)
+    return segment
+
+
+def get_compression_ratio(text: str) -> float:
+    text_bytes = text.encode("utf-8")
+    return len(text_bytes) / len(zlib.compress(text_bytes))
+
+
+def get_suppressed_tokens(
+    tokenizer: Tokenizer,
+    suppress_tokens: Tuple[int],
+) -> Optional[List[int]]:
+    if -1 in suppress_tokens:
+        suppress_tokens = [t for t in suppress_tokens if t >= 0]
+        suppress_tokens.extend(tokenizer.non_speech_tokens)
+    elif suppress_tokens is None or len(suppress_tokens) == 0:
+        suppress_tokens = []  # interpret None or an empty sequence as an empty list
+    else:
+        assert isinstance(suppress_tokens, list), "suppress_tokens must be a list"
+
+    suppress_tokens.extend(
+        [
+            tokenizer.transcribe,
+            tokenizer.translate,
+            tokenizer.sot,
+            tokenizer.sot_prev,
+            tokenizer.sot_lm,
+        ]
+    )
+
+    return tuple(sorted(set(suppress_tokens)))
+
+
+def merge_punctuations(alignment: List[dict], prepended: str, appended: str) -> None:
+    # merge prepended punctuations
+    i = len(alignment) - 2
+    j = len(alignment) - 1
+    while i >= 0:
+        previous = alignment[i]
+        following = alignment[j]
+        if previous["word"].startswith(" ") and previous["word"].strip() in prepended:
+            # prepend it to the following word
+            following["word"] = previous["word"] + following["word"]
+            following["tokens"] = previous["tokens"] + following["tokens"]
+            previous["word"] = ""
+            previous["tokens"] = []
+        else:
+            j = i
+        i -= 1
+
+    # merge appended punctuations
+    i = 0
+    j = 1
+    while j < len(alignment):
+        previous = alignment[i]
+        following = alignment[j]
+        if not previous["word"].endswith(" ") and following["word"] in appended:
+            # append it to the previous word
+            previous["word"] = previous["word"] + following["word"]
+            previous["tokens"] = previous["tokens"] + following["tokens"]
+            following["word"] = ""
+            following["tokens"] = []
+        else:
+            i = j
+        j += 1

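The fallback loop above retries decoding at higher temperatures whenever a result looks degenerate, and `get_compression_ratio` is the repetition test: text that compresses too well is assumed to be looping. A minimal sketch of that check, reusing the helper defined in this file (the 2.4 cutoff is an assumed illustrative default, not taken from this diff):

```python
import zlib


def get_compression_ratio(text: str) -> float:
    # Same helper as defined in faster_whisper/transcribe.py above.
    text_bytes = text.encode("utf-8")
    return len(text_bytes) / len(zlib.compress(text_bytes))


COMPRESSION_RATIO_THRESHOLD = 2.4  # assumed default, for illustration only

repetitive = "I'm sorry. I'm sorry. " * 25  # degenerate, looping output
normal = "And so my fellow Americans, ask not what your country can do for you."

for text in (repetitive, normal):
    ratio = get_compression_ratio(text)
    needs_fallback = ratio > COMPRESSION_RATIO_THRESHOLD  # too repetitive -> retry hotter
    print(f"{ratio:.2f} -> needs_fallback={needs_fallback}")
```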
+ 163 - 0
faster_whisper/utils.py

@@ -0,0 +1,163 @@
+import logging
+import os
+import re
+
+from typing import List, Optional
+
+import huggingface_hub
+import requests
+
+from tqdm.auto import tqdm
+
+_MODELS = {
+    "tiny.en": "Systran/faster-whisper-tiny.en",
+    "tiny": "Systran/faster-whisper-tiny",
+    "base.en": "Systran/faster-whisper-base.en",
+    "base": "Systran/faster-whisper-base",
+    "small.en": "Systran/faster-whisper-small.en",
+    "small": "Systran/faster-whisper-small",
+    "medium.en": "Systran/faster-whisper-medium.en",
+    "medium": "Systran/faster-whisper-medium",
+    "large-v1": "Systran/faster-whisper-large-v1",
+    "large-v2": "Systran/faster-whisper-large-v2",
+    "large-v3": "Systran/faster-whisper-large-v3",
+    "large": "Systran/faster-whisper-large-v3",
+    "distil-large-v2": "Systran/faster-distil-whisper-large-v2",
+    "distil-medium.en": "Systran/faster-distil-whisper-medium.en",
+    "distil-small.en": "Systran/faster-distil-whisper-small.en",
+    "distil-large-v3": "Systran/faster-distil-whisper-large-v3",
+    "large-v3-turbo": "mobiuslabsgmbh/faster-whisper-large-v3-turbo",
+    "turbo": "mobiuslabsgmbh/faster-whisper-large-v3-turbo",
+}
+
+
+def available_models() -> List[str]:
+    """Returns the names of available models."""
+    return list(_MODELS.keys())
+
+
+def get_assets_path():
+    """Returns the path to the assets directory."""
+    return os.path.join(os.path.dirname(os.path.abspath(__file__)), "assets")
+
+
+def get_logger():
+    """Returns the module logger."""
+    return logging.getLogger("faster_whisper")
+
+
+def download_model(
+    size_or_id: str,
+    output_dir: Optional[str] = None,
+    local_files_only: bool = False,
+    cache_dir: Optional[str] = None,
+    revision: Optional[str] = None,
+):
+    """Downloads a CTranslate2 Whisper model from the Hugging Face Hub.
+
+    Args:
+      size_or_id: Size of the model to download from https://huggingface.co/Systran
+        (tiny, tiny.en, base, base.en, small, small.en, distil-small.en, medium, medium.en,
+        distil-medium.en, large-v1, large-v2, large-v3, large, distil-large-v2,
+        distil-large-v3), or a CTranslate2-converted model ID from the Hugging Face Hub
+        (e.g. Systran/faster-whisper-large-v3).
+      output_dir: Directory where the model should be saved. If not set, the model is saved in
+        the cache directory.
+      local_files_only: If True, avoid downloading the file and return the path to the local
+        cached file if it exists.
+      cache_dir: Path to the folder where cached files are stored.
+      revision: An optional Git revision id which can be a branch name, a tag, or a
+        commit hash.
+
+    Returns:
+      The path to the downloaded model.
+
+    Raises:
+      ValueError: if the model size is invalid.
+    """
+    if re.match(r".*/.*", size_or_id):
+        repo_id = size_or_id
+    else:
+        repo_id = _MODELS.get(size_or_id)
+        if repo_id is None:
+            raise ValueError(
+                "Invalid model size '%s', expected one of: %s"
+                % (size_or_id, ", ".join(_MODELS.keys()))
+            )
+
+    allow_patterns = [
+        "config.json",
+        "preprocessor_config.json",
+        "model.bin",
+        "tokenizer.json",
+        "vocabulary.*",
+    ]
+
+    kwargs = {
+        "local_files_only": local_files_only,
+        "allow_patterns": allow_patterns,
+        "tqdm_class": disabled_tqdm,
+        "revision": revision,
+    }
+
+    if output_dir is not None:
+        kwargs["local_dir"] = output_dir
+        kwargs["local_dir_use_symlinks"] = False
+
+    if cache_dir is not None:
+        kwargs["cache_dir"] = cache_dir
+
+    try:
+        return huggingface_hub.snapshot_download(repo_id, **kwargs)
+    except (
+        huggingface_hub.utils.HfHubHTTPError,
+        requests.exceptions.ConnectionError,
+    ) as exception:
+        logger = get_logger()
+        logger.warning(
+            "An error occurred while synchronizing the model %s from the Hugging Face Hub:\n%s",
+            repo_id,
+            exception,
+        )
+        logger.warning(
+            "Trying to load the model directly from the local cache, if it exists."
+        )
+
+        kwargs["local_files_only"] = True
+        return huggingface_hub.snapshot_download(repo_id, **kwargs)
+
+
+def format_timestamp(
+    seconds: float,
+    always_include_hours: bool = False,
+    decimal_marker: str = ".",
+) -> str:
+    assert seconds >= 0, "non-negative timestamp expected"
+    milliseconds = round(seconds * 1000.0)
+
+    hours = milliseconds // 3_600_000
+    milliseconds -= hours * 3_600_000
+
+    minutes = milliseconds // 60_000
+    milliseconds -= minutes * 60_000
+
+    seconds = milliseconds // 1_000
+    milliseconds -= seconds * 1_000
+
+    hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else ""
+    return (
+        f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}"
+    )
+
+
+class disabled_tqdm(tqdm):
+    def __init__(self, *args, **kwargs):
+        kwargs["disable"] = True
+        super().__init__(*args, **kwargs)
+
+
+def get_end(segments: List[dict]) -> Optional[float]:
+    return next(
+        (w["end"] for s in reversed(segments) for w in reversed(s["words"])),
+        segments[-1]["end"] if segments else None,
+    )

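A hedged usage sketch for the helpers above; the output directory is a placeholder path, and the `format_timestamp` values are hand-checked against the arithmetic in the function:

```python
from faster_whisper.utils import available_models, download_model, format_timestamp

# 3661.5 s = 1 h, 1 min, 1 s and 500 ms.
assert format_timestamp(3661.5, always_include_hours=True) == "01:01:01.500"
assert format_timestamp(9.75) == "00:09.750"

print(available_models())  # ['tiny.en', 'tiny', ..., 'turbo']

# "./models/tiny-ct2" is a placeholder directory; "tiny" resolves to
# Systran/faster-whisper-tiny and only the files in allow_patterns are fetched.
model_dir = download_model("tiny", output_dir="./models/tiny-ct2")
print(model_dir)
```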
+ 372 - 0
faster_whisper/vad.py

@@ -0,0 +1,372 @@
+import bisect
+import functools
+import os
+
+from dataclasses import dataclass
+from typing import Dict, List, Optional, Tuple
+
+import numpy as np
+
+from faster_whisper.utils import get_assets_path
+
+
+# The code below is adapted from https://github.com/snakers4/silero-vad.
+@dataclass
+class VadOptions:
+    """VAD options.
+
+    Attributes:
+      threshold: Speech threshold. Silero VAD outputs speech probabilities for each audio chunk,
+        probabilities ABOVE this value are considered as SPEECH. It is better to tune this
+        parameter for each dataset separately, but "lazy" 0.5 is pretty good for most datasets.
+      neg_threshold: Silence threshold for determining the end of speech. If a probability is lower
+        than neg_threshold, it is always considered silence. Values higher than neg_threshold
+        are only considered speech if the previous sample was classified as speech; otherwise,
+        they are treated as silence. This parameter helps refine the detection of speech
+        transitions, ensuring smoother segment boundaries.
+      min_speech_duration_ms: Final speech chunks shorter than this are thrown out.
+      max_speech_duration_s: Maximum duration of speech chunks in seconds. Chunks longer
+        than max_speech_duration_s will be split at the timestamp of the last silence that
+        lasts more than 100ms (if any), to prevent aggressive cutting. Otherwise, they will be
+        split aggressively just before max_speech_duration_s.
+      min_silence_duration_ms: At the end of each speech chunk, wait for min_silence_duration_ms
+        before splitting it.
+      speech_pad_ms: Final speech chunks are padded by speech_pad_ms on each side.
+    """
+
+    threshold: float = 0.5
+    neg_threshold: Optional[float] = None
+    min_speech_duration_ms: int = 0
+    max_speech_duration_s: float = float("inf")
+    min_silence_duration_ms: int = 2000
+    speech_pad_ms: int = 400
+
+
+def get_speech_timestamps(
+    audio: np.ndarray,
+    vad_options: Optional[VadOptions] = None,
+    sampling_rate: int = 16000,
+    **kwargs,
+) -> List[dict]:
+    """This method is used for splitting long audios into speech chunks using silero VAD.
+
+    Args:
+      audio: One dimensional float array.
+      vad_options: Options for VAD processing.
+      sampling_rate: Sampling rate of the audio.
+      kwargs: VAD options passed as keyword arguments for backward compatibility.
+
+    Returns:
+      List of dicts containing begin and end samples of each speech chunk.
+    """
+    if vad_options is None:
+        vad_options = VadOptions(**kwargs)
+
+    threshold = vad_options.threshold
+    neg_threshold = vad_options.neg_threshold
+    min_speech_duration_ms = vad_options.min_speech_duration_ms
+    max_speech_duration_s = vad_options.max_speech_duration_s
+    min_silence_duration_ms = vad_options.min_silence_duration_ms
+    window_size_samples = 512
+    speech_pad_ms = vad_options.speech_pad_ms
+    min_speech_samples = sampling_rate * min_speech_duration_ms / 1000
+    speech_pad_samples = sampling_rate * speech_pad_ms / 1000
+    max_speech_samples = (
+        sampling_rate * max_speech_duration_s
+        - window_size_samples
+        - 2 * speech_pad_samples
+    )
+    min_silence_samples = sampling_rate * min_silence_duration_ms / 1000
+    min_silence_samples_at_max_speech = sampling_rate * 98 / 1000
+
+    audio_length_samples = len(audio)
+
+    model = get_vad_model()
+
+    padded_audio = np.pad(
+        audio, (0, window_size_samples - audio.shape[0] % window_size_samples)
+    )
+    speech_probs = model(padded_audio.reshape(1, -1)).squeeze(0)
+
+    triggered = False
+    speeches = []
+    current_speech = {}
+    if neg_threshold is None:
+        neg_threshold = max(threshold - 0.15, 0.01)
+
+    # to save potential segment end (and tolerate some silence)
+    temp_end = 0
+    # to save potential segment limits in case of maximum segment size reached
+    prev_end = next_start = 0
+
+    for i, speech_prob in enumerate(speech_probs):
+        if (speech_prob >= threshold) and temp_end:
+            temp_end = 0
+            if next_start < prev_end:
+                next_start = window_size_samples * i
+
+        if (speech_prob >= threshold) and not triggered:
+            triggered = True
+            current_speech["start"] = window_size_samples * i
+            continue
+
+        if (
+            triggered
+            and (window_size_samples * i) - current_speech["start"] > max_speech_samples
+        ):
+            if prev_end:
+                current_speech["end"] = prev_end
+                speeches.append(current_speech)
+                current_speech = {}
+                # previously reached silence (< neg_thres) and is still not speech (< thres)
+                if next_start < prev_end:
+                    triggered = False
+                else:
+                    current_speech["start"] = next_start
+                prev_end = next_start = temp_end = 0
+            else:
+                current_speech["end"] = window_size_samples * i
+                speeches.append(current_speech)
+                current_speech = {}
+                prev_end = next_start = temp_end = 0
+                triggered = False
+                continue
+
+        if (speech_prob < neg_threshold) and triggered:
+            if not temp_end:
+                temp_end = window_size_samples * i
+            # condition to avoid cutting in very short silence
+            if (window_size_samples * i) - temp_end > min_silence_samples_at_max_speech:
+                prev_end = temp_end
+            if (window_size_samples * i) - temp_end < min_silence_samples:
+                continue
+            else:
+                current_speech["end"] = temp_end
+                if (
+                    current_speech["end"] - current_speech["start"]
+                ) > min_speech_samples:
+                    speeches.append(current_speech)
+                current_speech = {}
+                prev_end = next_start = temp_end = 0
+                triggered = False
+                continue
+
+    if (
+        current_speech
+        and (audio_length_samples - current_speech["start"]) > min_speech_samples
+    ):
+        current_speech["end"] = audio_length_samples
+        speeches.append(current_speech)
+
+    for i, speech in enumerate(speeches):
+        if i == 0:
+            speech["start"] = int(max(0, speech["start"] - speech_pad_samples))
+        if i != len(speeches) - 1:
+            silence_duration = speeches[i + 1]["start"] - speech["end"]
+            if silence_duration < 2 * speech_pad_samples:
+                speech["end"] += int(silence_duration // 2)
+                speeches[i + 1]["start"] = int(
+                    max(0, speeches[i + 1]["start"] - silence_duration // 2)
+                )
+            else:
+                speech["end"] = int(
+                    min(audio_length_samples, speech["end"] + speech_pad_samples)
+                )
+                speeches[i + 1]["start"] = int(
+                    max(0, speeches[i + 1]["start"] - speech_pad_samples)
+                )
+        else:
+            speech["end"] = int(
+                min(audio_length_samples, speech["end"] + speech_pad_samples)
+            )
+
+    return speeches
+
+
+def collect_chunks(
+    audio: np.ndarray, chunks: List[dict], sampling_rate: int = 16000
+) -> Tuple[List[np.ndarray], List[Dict[str, int]]]:
+    """Collects audio chunks."""
+    if not chunks:
+        chunk_metadata = {
+            "start_time": 0,
+            "end_time": 0,
+        }
+        return [np.array([], dtype=np.float32)], [chunk_metadata]
+
+    audio_chunks = []
+    chunks_metadata = []
+    for chunk in chunks:
+        chunk_metadata = {
+            "start_time": chunk["start"] / sampling_rate,
+            "end_time": chunk["end"] / sampling_rate,
+        }
+        audio_chunks.append(audio[chunk["start"] : chunk["end"]])
+        chunks_metadata.append(chunk_metadata)
+    return audio_chunks, chunks_metadata
+
+
+class SpeechTimestampsMap:
+    """Helper class to restore original speech timestamps."""
+
+    def __init__(self, chunks: List[dict], sampling_rate: int, time_precision: int = 2):
+        self.sampling_rate = sampling_rate
+        self.time_precision = time_precision
+        self.chunk_end_sample = []
+        self.total_silence_before = []
+
+        previous_end = 0
+        silent_samples = 0
+
+        for chunk in chunks:
+            silent_samples += chunk["start"] - previous_end
+            previous_end = chunk["end"]
+
+            self.chunk_end_sample.append(chunk["end"] - silent_samples)
+            self.total_silence_before.append(silent_samples / sampling_rate)
+
+    def get_original_time(
+        self,
+        time: float,
+        chunk_index: Optional[int] = None,
+    ) -> float:
+        if chunk_index is None:
+            chunk_index = self.get_chunk_index(time)
+
+        total_silence_before = self.total_silence_before[chunk_index]
+        return round(total_silence_before + time, self.time_precision)
+
+    def get_chunk_index(self, time: float) -> int:
+        sample = int(time * self.sampling_rate)
+        return min(
+            bisect.bisect(self.chunk_end_sample, sample),
+            len(self.chunk_end_sample) - 1,
+        )
+
+
+@functools.lru_cache
+def get_vad_model():
+    """Returns the VAD model instance."""
+    encoder_path = os.path.join(get_assets_path(), "silero_encoder_v5.onnx")
+    decoder_path = os.path.join(get_assets_path(), "silero_decoder_v5.onnx")
+    return SileroVADModel(encoder_path, decoder_path)
+
+
+class SileroVADModel:
+    def __init__(self, encoder_path, decoder_path):
+        try:
+            import onnxruntime
+        except ImportError as e:
+            raise RuntimeError(
+                "Applying the VAD filter requires the onnxruntime package"
+            ) from e
+
+        opts = onnxruntime.SessionOptions()
+        opts.inter_op_num_threads = 1
+        opts.intra_op_num_threads = 1
+        opts.enable_cpu_mem_arena = False
+        opts.log_severity_level = 4
+
+        self.encoder_session = onnxruntime.InferenceSession(
+            encoder_path,
+            providers=["CPUExecutionProvider"],
+            sess_options=opts,
+        )
+        self.decoder_session = onnxruntime.InferenceSession(
+            decoder_path,
+            providers=["CPUExecutionProvider"],
+            sess_options=opts,
+        )
+
+    def __call__(
+        self, audio: np.ndarray, num_samples: int = 512, context_size_samples: int = 64
+    ):
+        assert (
+            audio.ndim == 2
+        ), "Input should be a 2D array with size (batch_size, num_samples)"
+        assert (
+            audio.shape[1] % num_samples == 0
+        ), "Input size should be a multiple of num_samples"
+
+        batch_size = audio.shape[0]
+
+        state = np.zeros((2, batch_size, 128), dtype="float32")
+        context = np.zeros(
+            (batch_size, context_size_samples),
+            dtype="float32",
+        )
+
+        batched_audio = audio.reshape(batch_size, -1, num_samples)
+        context = batched_audio[..., -context_size_samples:]
+        context[:, -1] = 0
+        context = np.roll(context, 1, 1)
+        batched_audio = np.concatenate([context, batched_audio], 2)
+
+        batched_audio = batched_audio.reshape(-1, num_samples + context_size_samples)
+
+        encoder_batch_size = 10000
+        num_segments = batched_audio.shape[0]
+        encoder_outputs = []
+        for i in range(0, num_segments, encoder_batch_size):
+            encoder_output = self.encoder_session.run(
+                None, {"input": batched_audio[i : i + encoder_batch_size]}
+            )[0]
+            encoder_outputs.append(encoder_output)
+
+        encoder_output = np.concatenate(encoder_outputs, axis=0)
+        encoder_output = encoder_output.reshape(batch_size, -1, 128)
+
+        decoder_outputs = []
+        for window in np.split(encoder_output, encoder_output.shape[1], axis=1):
+            out, state = self.decoder_session.run(
+                None, {"input": window.squeeze(1), "state": state}
+            )
+            decoder_outputs.append(out)
+
+        out = np.stack(decoder_outputs, axis=1).squeeze(-1)
+        return out
+
+
+def merge_segments(segments_list, vad_options: VadOptions, sampling_rate: int = 16000):
+    if not segments_list:
+        return []
+
+    curr_end = 0
+    seg_idxs = []
+    merged_segments = []
+    edge_padding = vad_options.speech_pad_ms * sampling_rate // 1000
+    chunk_length = vad_options.max_speech_duration_s * sampling_rate
+
+    curr_start = segments_list[0]["start"]
+
+    for idx, seg in enumerate(segments_list):
+        # if a segment starts before the previous segment ends, undo the edge padding
+        # on its start; likewise trim its end when it overlaps the next segment's start.
+        if idx > 0:
+            if seg["start"] < segments_list[idx - 1]["end"]:
+                seg["start"] += edge_padding
+        if idx < len(segments_list) - 1:
+            if seg["end"] > segments_list[idx + 1]["start"]:
+                seg["end"] -= edge_padding
+
+        if seg["end"] - curr_start > chunk_length and curr_end - curr_start > 0:
+            merged_segments.append(
+                {
+                    "start": curr_start,
+                    "end": curr_end,
+                    "segments": seg_idxs,
+                }
+            )
+            curr_start = seg["start"]
+            seg_idxs = []
+        curr_end = seg["end"]
+        seg_idxs.append((seg["start"], seg["end"]))
+    # add final
+    merged_segments.append(
+        {
+            "start": curr_start,
+            "end": curr_end,
+            "segments": seg_idxs,
+        }
+    )
+    return merged_segments

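A short sketch of how the VAD helpers fit together, mirroring the calls made from `transcribe.py`; `speech.wav` is a placeholder file and running this needs the `onnxruntime` dependency used by `SileroVADModel`:

```python
import numpy as np

from faster_whisper import decode_audio
from faster_whisper.vad import VadOptions, collect_chunks, get_speech_timestamps

# Placeholder input file; decode_audio resamples it to 16 kHz mono float32.
audio = decode_audio("speech.wav", sampling_rate=16000)

options = VadOptions(min_silence_duration_ms=500, speech_pad_ms=200)
speech_chunks = get_speech_timestamps(audio, options)  # [{"start": ..., "end": ...}] in samples
audio_chunks, chunks_metadata = collect_chunks(audio, speech_chunks)  # metadata times in seconds

# Concatenate the speech-only audio, as done before feature extraction in transcribe().
speech_only = np.concatenate(audio_chunks, axis=0)
print(len(speech_chunks), "speech chunk(s),", round(len(speech_only) / 16000, 2), "s of speech")
```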
+ 3 - 0
faster_whisper/version.py

@@ -0,0 +1,3 @@
+"""Version information."""
+
+__version__ = "1.1.1"

+ 1 - 0
requirements.conversion.txt

@@ -0,0 +1 @@
+transformers[torch]>=4.23

+ 6 - 0
requirements.txt

@@ -0,0 +1,6 @@
+ctranslate2>=4.0,<5
+huggingface_hub>=0.13
+tokenizers>=0.13,<1
+onnxruntime>=1.14,<2
+av>=11
+tqdm

+ 9 - 0
setup.cfg

@@ -0,0 +1,9 @@
+[flake8]
+max-line-length = 100
+ignore =
+  E203,
+  W503,
+
+[isort]
+profile=black
+lines_between_types=1

+ 67 - 0
setup.py

@@ -0,0 +1,67 @@
+import os
+
+from setuptools import find_packages, setup
+
+base_dir = os.path.dirname(os.path.abspath(__file__))
+
+
+def get_long_description():
+    readme_path = os.path.join(base_dir, "README.md")
+    with open(readme_path, encoding="utf-8") as readme_file:
+        return readme_file.read()
+
+
+def get_project_version():
+    version_path = os.path.join(base_dir, "faster_whisper", "version.py")
+    version = {}
+    with open(version_path, encoding="utf-8") as fp:
+        exec(fp.read(), version)
+    return version["__version__"]
+
+
+def get_requirements(path):
+    with open(path, encoding="utf-8") as requirements:
+        return [requirement.strip() for requirement in requirements]
+
+
+install_requires = get_requirements(os.path.join(base_dir, "requirements.txt"))
+conversion_requires = get_requirements(
+    os.path.join(base_dir, "requirements.conversion.txt")
+)
+
+setup(
+    name="faster-whisper",
+    version=get_project_version(),
+    license="MIT",
+    description="Faster Whisper transcription with CTranslate2",
+    long_description=get_long_description(),
+    long_description_content_type="text/markdown",
+    author="Guillaume Klein",
+    url="https://github.com/SYSTRAN/faster-whisper",
+    classifiers=[
+        "Development Status :: 4 - Beta",
+        "Intended Audience :: Developers",
+        "Intended Audience :: Science/Research",
+        "License :: OSI Approved :: MIT License",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3 :: Only",
+        "Programming Language :: Python :: 3.9",
+        "Programming Language :: Python :: 3.10",
+        "Programming Language :: Python :: 3.11",
+        "Topic :: Scientific/Engineering :: Artificial Intelligence",
+    ],
+    keywords="openai whisper speech ctranslate2 inference quantization transformer",
+    python_requires=">=3.9",
+    install_requires=install_requires,
+    extras_require={
+        "conversion": conversion_requires,
+        "dev": [
+            "black==23.*",
+            "flake8==6.*",
+            "isort==5.*",
+            "pytest==7.*",
+        ],
+    },
+    packages=find_packages(),
+    include_package_data=True,
+)

+ 18 - 0
tests/conftest.py

@@ -0,0 +1,18 @@
+import os
+
+import pytest
+
+
+@pytest.fixture
+def data_dir():
+    return os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")
+
+
+@pytest.fixture
+def jfk_path(data_dir):
+    return os.path.join(data_dir, "jfk.flac")
+
+
+@pytest.fixture
+def physcisworks_path(data_dir):
+    return os.path.join(data_dir, "physicsworks.wav")

BIN
tests/data/hotwords.mp3


BIN
tests/data/jfk.flac


BIN
tests/data/multilingual.mp3


BIN
tests/data/physicsworks.wav


BIN
tests/data/stereo_diarization.wav


+ 120 - 0
tests/test_tokenizer.py

@@ -0,0 +1,120 @@
+from faster_whisper import WhisperModel
+from faster_whisper.tokenizer import Tokenizer
+from faster_whisper.transcribe import get_suppressed_tokens
+
+
+def test_suppressed_tokens_minus_1():
+    model = WhisperModel("tiny.en")
+
+    tokenizer = Tokenizer(model.hf_tokenizer, False)
+    tokens = get_suppressed_tokens(tokenizer, [-1])
+    assert tokens == (
+        1,
+        2,
+        7,
+        8,
+        9,
+        10,
+        14,
+        25,
+        26,
+        27,
+        28,
+        29,
+        31,
+        58,
+        59,
+        60,
+        61,
+        62,
+        63,
+        90,
+        91,
+        92,
+        93,
+        357,
+        366,
+        438,
+        532,
+        685,
+        705,
+        796,
+        930,
+        1058,
+        1220,
+        1267,
+        1279,
+        1303,
+        1343,
+        1377,
+        1391,
+        1635,
+        1782,
+        1875,
+        2162,
+        2361,
+        2488,
+        3467,
+        4008,
+        4211,
+        4600,
+        4808,
+        5299,
+        5855,
+        6329,
+        7203,
+        9609,
+        9959,
+        10563,
+        10786,
+        11420,
+        11709,
+        11907,
+        13163,
+        13697,
+        13700,
+        14808,
+        15306,
+        16410,
+        16791,
+        17992,
+        19203,
+        19510,
+        20724,
+        22305,
+        22935,
+        27007,
+        30109,
+        30420,
+        33409,
+        34949,
+        40283,
+        40493,
+        40549,
+        47282,
+        49146,
+        50257,
+        50357,
+        50358,
+        50359,
+        50360,
+    )
+
+
+def test_suppressed_tokens_minus_value():
+    model = WhisperModel("tiny.en")
+
+    tokenizer = Tokenizer(model.hf_tokenizer, False)
+    tokens = get_suppressed_tokens(tokenizer, [13])
+    assert tokens == (13, 50257, 50357, 50358, 50359, 50360)
+
+
+def test_split_on_unicode():
+    model = WhisperModel("tiny")
+    tokenizer = Tokenizer(model.hf_tokenizer, False)
+
+    tokens = [8404, 871, 287, 6, 246, 526, 3210, 20378]
+    words, word_tokens = tokenizer.split_tokens_on_unicode(tokens)
+
+    assert words == [" elle", " est", " l", "'", "\ufffd", "é", "rit", "oire"]
+    assert word_tokens == [[8404], [871], [287], [6], [246], [526], [3210], [20378]]

+ 271 - 0
tests/test_transcribe.py

@@ -0,0 +1,271 @@
+import inspect
+import os
+
+import numpy as np
+
+from faster_whisper import BatchedInferencePipeline, WhisperModel, decode_audio
+
+
+def test_supported_languages():
+    model = WhisperModel("tiny.en")
+    assert model.supported_languages == ["en"]
+
+
+def test_transcribe(jfk_path):
+    model = WhisperModel("tiny")
+    segments, info = model.transcribe(jfk_path, word_timestamps=True)
+    assert info.all_language_probs is not None
+
+    assert info.language == "en"
+    assert info.language_probability > 0.9
+    assert info.duration == 11
+
+    # Get top language info from all results, which should match the
+    # already existing metadata
+    top_lang, top_lang_score = info.all_language_probs[0]
+    assert info.language == top_lang
+    assert abs(info.language_probability - top_lang_score) < 1e-16
+
+    segments = list(segments)
+
+    assert len(segments) == 1
+
+    segment = segments[0]
+
+    assert segment.text == (
+        " And so my fellow Americans, ask not what your country can do for you, "
+        "ask what you can do for your country."
+    )
+
+    assert segment.text == "".join(word.word for word in segment.words)
+    assert segment.start == segment.words[0].start
+    assert segment.end == segment.words[-1].end
+    batched_model = BatchedInferencePipeline(model=model)
+    result, info = batched_model.transcribe(
+        jfk_path, word_timestamps=True, vad_filter=False
+    )
+    assert info.language == "en"
+    assert info.language_probability > 0.7
+    segments = []
+    for segment in result:
+        segments.append(
+            {"start": segment.start, "end": segment.end, "text": segment.text}
+        )
+
+    assert len(segments) == 1
+    assert segment.text == (
+        " And so my fellow Americans ask not what your country can do for you, "
+        "ask what you can do for your country."
+    )
+
+
+def test_batched_transcribe(physcisworks_path):
+    model = WhisperModel("tiny")
+    batched_model = BatchedInferencePipeline(model=model)
+    result, info = batched_model.transcribe(physcisworks_path, batch_size=16)
+    assert info.language == "en"
+    assert info.language_probability > 0.7
+    segments = []
+    for segment in result:
+        segments.append(
+            {"start": segment.start, "end": segment.end, "text": segment.text}
+        )
+    # number of near 30 sec segments
+    assert len(segments) == 7
+
+    result, info = batched_model.transcribe(
+        physcisworks_path,
+        batch_size=16,
+        without_timestamps=False,
+        word_timestamps=True,
+    )
+    segments = []
+    for segment in result:
+        assert segment.words is not None
+        segments.append(
+            {"start": segment.start, "end": segment.end, "text": segment.text}
+        )
+    assert len(segments) > 7
+
+
+def test_empty_audio():
+    audio = np.asarray([], dtype="float32")
+    model = WhisperModel("tiny")
+    pipeline = BatchedInferencePipeline(model=model)
+    assert list(model.transcribe(audio)[0]) == []
+    assert list(pipeline.transcribe(audio)[0]) == []
+    model.detect_language(audio)
+
+
+def test_prefix_with_timestamps(jfk_path):
+    model = WhisperModel("tiny")
+    segments, _ = model.transcribe(jfk_path, prefix="And so my fellow Americans")
+    segments = list(segments)
+
+    assert len(segments) == 1
+
+    segment = segments[0]
+
+    assert segment.text == (
+        " And so my fellow Americans, ask not what your country can do for you, "
+        "ask what you can do for your country."
+    )
+
+    assert segment.start == 0
+    assert 10 < segment.end <= 11
+
+
+def test_vad(jfk_path):
+    model = WhisperModel("tiny")
+    segments, info = model.transcribe(
+        jfk_path,
+        vad_filter=True,
+        vad_parameters=dict(min_silence_duration_ms=500, speech_pad_ms=200),
+    )
+    segments = list(segments)
+
+    assert len(segments) == 1
+    segment = segments[0]
+
+    assert segment.text == (
+        " And so my fellow Americans ask not what your country can do for you, "
+        "ask what you can do for your country."
+    )
+
+    assert 0 < segment.start < 1
+    assert 10 < segment.end < 11
+
+    assert info.vad_options.min_silence_duration_ms == 500
+    assert info.vad_options.speech_pad_ms == 200
+
+
+def test_stereo_diarization(data_dir):
+    model = WhisperModel("tiny")
+
+    audio_path = os.path.join(data_dir, "stereo_diarization.wav")
+    left, right = decode_audio(audio_path, split_stereo=True)
+
+    segments, _ = model.transcribe(left)
+    transcription = "".join(segment.text for segment in segments).strip()
+    assert transcription == (
+        "He began a confused complaint against the wizard, "
+        "who had vanished behind the curtain on the left."
+    )
+
+    segments, _ = model.transcribe(right)
+    transcription = "".join(segment.text for segment in segments).strip()
+    assert transcription == "The horizon seems extremely distant."
+
+
+def test_multilingual_transcription(data_dir):
+    model = WhisperModel("tiny")
+    pipeline = BatchedInferencePipeline(model)
+
+    audio_path = os.path.join(data_dir, "multilingual.mp3")
+    audio = decode_audio(audio_path)
+
+    segments, info = model.transcribe(
+        audio,
+        multilingual=True,
+        without_timestamps=True,
+        condition_on_previous_text=False,
+    )
+    segments = list(segments)
+
+    assert (
+        segments[0].text
+        == " Permission is hereby granted, free of charge, to any person obtaining a copy of the"
+        " software and associated documentation files to deal in the software without restriction,"
+        " including without limitation the rights to use, copy, modify, merge, publish, distribute"
+        ", sublicence, and or cell copies of the software, and to permit persons to whom the "
+        "software is furnished to do so, subject to the following conditions. The above copyright"
+        " notice and this permission notice, shall be included in all copies or substantial "
+        "portions of the software."
+    )
+
+    assert (
+        segments[1].text
+        == " Jedem, der dieses Software und die dazu gehöregen Dokumentationsdatein erhält, wird "
+        "hiermit unengeltlich die Genehmigung erteilt, wird der Software und eingeschränkt zu "
+        "verfahren. Dies umfasst insbesondere das Recht, die Software zu verwenden, zu "
+        "vervielfältigen, zu modifizieren, zu Samenzofügen, zu veröffentlichen, zu verteilen, "
+        "unterzulizenzieren und oder kopieren der Software zu verkaufen und diese Rechte "
+        "unterfolgen den Bedingungen anderen zu übertragen."
+    )
+
+    segments, info = pipeline.transcribe(audio, multilingual=True)
+    segments = list(segments)
+
+    assert (
+        segments[0].text
+        == " Permission is hereby granted, free of charge, to any person obtaining a copy of the"
+        " software and associated documentation files to deal in the software without restriction,"
+        " including without limitation the rights to use, copy, modify, merge, publish, distribute"
+        ", sublicence, and or cell copies of the software, and to permit persons to whom the "
+        "software is furnished to do so, subject to the following conditions. The above copyright"
+        " notice and this permission notice, shall be included in all copies or substantial "
+        "portions of the software."
+    )
+    assert (
+        "Dokumentationsdatein erhält, wird hiermit unengeltlich die Genehmigung erteilt,"
+        " wird der Software und eingeschränkt zu verfahren. Dies umfasst insbesondere das Recht,"
+        " die Software zu verwenden, zu vervielfältigen, zu modifizieren"
+        in segments[1].text
+    )
+
+
+def test_hotwords(data_dir):
+    model = WhisperModel("tiny")
+    pipeline = BatchedInferencePipeline(model)
+
+    audio_path = os.path.join(data_dir, "hotwords.mp3")
+    audio = decode_audio(audio_path)
+
+    segments, info = model.transcribe(audio, hotwords="ComfyUI")
+    segments = list(segments)
+
+    assert "ComfyUI" in segments[0].text
+    assert info.transcription_options.hotwords == "ComfyUI"
+
+    segments, info = pipeline.transcribe(audio, hotwords="ComfyUI")
+    segments = list(segments)
+
+    assert "ComfyUI" in segments[0].text
+    assert info.transcription_options.hotwords == "ComfyUI"
+
+
+def test_transcribe_signature():
+    model_transcribe_args = set(inspect.getargs(WhisperModel.transcribe.__code__).args)
+    pipeline_transcribe_args = set(
+        inspect.getargs(BatchedInferencePipeline.transcribe.__code__).args
+    )
+    pipeline_transcribe_args.remove("batch_size")
+
+    assert model_transcribe_args == pipeline_transcribe_args
+
+
+def test_monotonic_timestamps(physcisworks_path):
+    model = WhisperModel("tiny")
+    pipeline = BatchedInferencePipeline(model=model)
+
+    segments, info = model.transcribe(physcisworks_path, word_timestamps=True)
+    segments = list(segments)
+
+    for i in range(len(segments) - 1):
+        assert segments[i].start <= segments[i].end
+        assert segments[i].end <= segments[i + 1].start
+        for word in segments[i].words:
+            assert word.start <= word.end
+            assert word.end <= segments[i].end
+    assert segments[-1].end <= info.duration
+
+    segments, info = pipeline.transcribe(physcisworks_path, word_timestamps=True)
+    segments = list(segments)
+
+    for i in range(len(segments) - 1):
+        assert segments[i].start <= segments[i].end
+        assert segments[i].end <= segments[i + 1].start
+        for word in segments[i].words:
+            assert word.start <= word.end
+            assert word.end <= segments[i].end
+    assert segments[-1].end <= info.duration

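For reference, the smallest end-to-end call these tests exercise looks roughly like this; `jfk.flac` stands in for the bundled test file:

```python
from faster_whisper import WhisperModel

model = WhisperModel("tiny")
segments, info = model.transcribe("jfk.flac", word_timestamps=True)

print(info.language, info.language_probability)
for segment in segments:  # segments is a generator; iterating runs the decoding
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
    for word in segment.words:
        print(f"  {word.start:.2f}-{word.end:.2f} {word.word} ({word.probability:.2f})")
```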
+ 29 - 0
tests/test_utils.py

@@ -0,0 +1,29 @@
+import os
+
+from faster_whisper import available_models, download_model
+
+
+def test_available_models():
+    models = available_models()
+    assert isinstance(models, list)
+    assert "tiny" in models
+
+
+def test_download_model(tmpdir):
+    output_dir = str(tmpdir.join("model"))
+
+    model_dir = download_model("tiny", output_dir=output_dir)
+
+    assert model_dir == output_dir
+    assert os.path.isdir(model_dir)
+    assert not os.path.islink(model_dir)
+
+    for filename in os.listdir(model_dir):
+        path = os.path.join(model_dir, filename)
+        assert not os.path.islink(path)
+
+
+def test_download_model_in_cache(tmpdir):
+    cache_dir = str(tmpdir.join("model"))
+    download_model("tiny", cache_dir=cache_dir)
+    assert os.path.isdir(cache_dir)