BLAB: Brutally Long Audio Bench

Dataset Summary

Brutally Long Audio Bench (BLAB) is a challenging long-form audio benchmark that evaluates audio LMs on localization, duration estimation, emotion, and counting tasks using audio segments averaging 51 minutes in length. BLAB consists of 833+ hours of diverse, full-length YouTube audio clips, each paired with human-annotated, text-based natural language questions and answers. Our audio data were collected from permissively licensed sources and underwent a human-assisted filtering process to ensure task compliance.

NB: This data should only be used for evaluation purposes and not for model training.

Tasks Covered in BLAB

Localization

  • Word Localization: Locate the exact start and end times of specific words within the audio.
  • Named Entity Localization: Detect and locate the exact start and end times of named entities (e.g., people, organizations, locations).
  • Advertisement Localization: Locate and transcribe advertisement segments within a podcast.
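For illustration, a predicted (start, end) span for the localization tasks can be compared against a gold span using temporal intersection-over-union. The sketch below is a minimal example of that comparison, not the benchmark's official scoring method.

```python
def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    """Intersection-over-union of two (start, end) time spans, in seconds."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

# A 10-second prediction overlapping half of a 10-second gold span:
print(temporal_iou((5.0, 15.0), (10.0, 20.0)))  # 0.333...
```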

Counting

  • Speaker Number Estimation: Determine the number of unique speakers present in the full audio segment.

Duration

  • Event Duration: Calculate the duration of specific acoustic events (e.g., laughter in a comedy special, question-and-answer segments in a panel session, or a particular speaker's total speaking time in a meeting) within an audio sample.
  • Entire Duration: Estimate the total duration of an audio file, expressed in seconds.
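Because duration answers are expressed in seconds, a model response given as a clock-style timestamp has to be converted before comparison. The helper below is a hypothetical convenience function for that conversion, not part of the dataset itself.

```python
def timestamp_to_seconds(ts: str) -> float:
    """Convert 'HH:MM:SS', 'MM:SS', or a plain seconds string to float seconds."""
    seconds = 0.0
    for part in ts.split(":"):
        seconds = seconds * 60 + float(part)
    return seconds

print(timestamp_to_seconds("01:23:45"))  # 5025.0
```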

Emotion

  • Emotion Reasoning: Reason over emotional expressions conveyed in the audio.
  • Emotion Ranking: Rank different emotional expressions of speech and non-verbal sound present in the audio.

Dataset Structure

To load a specific task from BLAB, you'll need to specify its configuration name. Keep in mind that BLAB provides URLs to the YouTube audio files, not the actual audio files themselves. You'll need to download the audio from these URLs separately.

```python
from datasets import load_dataset

# Load the Word Localization task
word_localization_data = load_dataset("oreva/blab_long_audio", "word_localization")

# Load the Named Entity Localization task
named_entity_localization_data = load_dataset("oreva/blab_long_audio", "named_entity_localization")

# You can load any other task similarly:
# speaker_number_estimation_data = load_dataset("oreva/blab_long_audio", "speaker_number_estimation")
# entire_duration_data = load_dataset("oreva/blab_long_audio", "entire_duration")
# event_duration_data = load_dataset("oreva/blab_long_audio", "event_duration")
# emotion_reasoning_data = load_dataset("oreva/blab_long_audio", "emotion_reasoning")
# emotion_ranking_data = load_dataset("oreva/blab_long_audio", "emotion_ranking")
```
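Since the dataset ships URLs rather than audio files, each clip has to be fetched separately. One common approach is yt-dlp; the sketch below only builds the download command for a single URL (the actual URL field name in the dataset may differ, so check the features of the task you load).

```python
import subprocess

def build_download_cmd(url: str, out_dir: str = "audio") -> list[str]:
    """Build a yt-dlp command that fetches a clip and extracts its audio as WAV."""
    return [
        "yt-dlp",
        "--extract-audio",           # keep only the audio stream
        "--audio-format", "wav",     # re-encode to WAV for downstream tools
        "--output", f"{out_dir}/%(id)s.%(ext)s",
        url,
    ]

cmd = build_download_cmd("https://www.youtube.com/watch?v=EXAMPLE")
# subprocess.run(cmd, check=True)  # uncomment to actually download
print(" ".join(cmd))
```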

Citation

@misc{ahia2025blabbrutallylongaudio,
      title={BLAB: Brutally Long Audio Bench},
      author={Orevaoghene Ahia and Martijn Bartelds and Kabir Ahuja and Hila Gonen and Valentin Hofmann and Siddhant Arora and Shuyue Stella Li and Vishal Puttagunta and Mofetoluwa Adeyemi and Charishma Buchireddy and Ben Walls and Noah Bennett and Shinji Watanabe and Noah A. Smith and Yulia Tsvetkov and Sachin Kumar},
      year={2025},
      eprint={2505.03054},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.03054},
}