
Alljoined-1.6M

Dataset ID: nm000134

Reference: Xu2025

Canonical aliases: Alljoined16M · Alljoined_16M · Alljoined1p6M

At a glance: EEG · 20 subjects · 1525 recordings · CC-BY-NC-ND-4.0

Load this dataset

This repository is a lightweight pointer: the raw EEG data lives at its canonical source (OpenNeuro / NEMAR), and EEGDash streams it on demand, returning a PyTorch / braindecode-compatible dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000134", cache_dir="./cache")
print(len(ds), "recordings")

You can also load it by canonical alias — these are registered classes in eegdash.dataset:

from eegdash.dataset import Alljoined16M
ds = Alljoined16M(cache_dir="./cache")
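The alias mechanism can be pictured as a small registry that maps each canonical name to the same dataset ID. This is an illustrative sketch of the pattern, not eegdash's actual implementation; `ALIASES` and `resolve` are hypothetical names:

```python
# Illustrative sketch: canonical aliases resolving to one dataset ID.
# This mirrors the idea behind eegdash.dataset's registered classes,
# not its real code.
ALIASES = {
    "Alljoined16M": "nm000134",
    "Alljoined_16M": "nm000134",
    "Alljoined1p6M": "nm000134",
}

def resolve(alias: str) -> str:
    """Map a canonical alias to its dataset ID (hypothetical helper)."""
    try:
        return ALIASES[alias]
    except KeyError:
        raise ValueError(f"unknown alias: {alias}") from None

print(resolve("Alljoined1p6M"))  # nm000134
```

Whichever alias you pick, the same recordings are fetched, so choose the spelling that reads best in your code.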

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000134")
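However you load the recordings, the usual next step before model training is cutting each continuous signal into fixed-length windows (braindecode provides preprocessing helpers for this). Below is a minimal NumPy-only sketch of that step, using this dataset's format (32 channels at 256 Hz) on a fake 10 s recording; the window and stride lengths are arbitrary choices for illustration:

```python
import numpy as np

# Fake recording in this dataset's format: 32 channels at 256 Hz.
sfreq = 256          # sampling rate (Hz)
n_channels = 32
recording = np.zeros((n_channels, 10 * sfreq))  # 10 s of signal

# Fixed-length windowing: 2 s windows with a 1 s stride (50% overlap).
win = 2 * sfreq
stride = sfreq
starts = range(0, recording.shape[1] - win + 1, stride)
windows = np.stack([recording[:, s:s + win] for s in starts])
print(windows.shape)  # (9, 32, 512)
```

The resulting `(n_windows, n_channels, n_times)` array is the shape most EEG models and PyTorch DataLoaders expect.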

Dataset metadata

Subjects 20
Recordings 1525
Tasks (count) 1
Channels 32 (×1525)
Sampling rate (Hz) 256 (×1525)
Total duration (h) 129.5
Size on disk 8.2 GB
Recording type EEG
Source nemar
License CC-BY-NC-ND-4.0
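The figures above can be cross-checked with a quick back-of-the-envelope calculation. Assuming 16-bit samples (an assumption; the actual source files may use a different encoding), the raw signal volume comes out close to the reported size on disk:

```python
# Plausibility check of the reported 8.2 GB size, assuming 16-bit samples.
hours = 129.5          # total duration from the metadata table
n_channels = 32
sfreq = 256            # Hz
bytes_per_sample = 2   # assumed int16 encoding

samples_per_channel = hours * 3600 * sfreq
raw_bytes = samples_per_channel * n_channels * bytes_per_sample
print(f"{raw_bytes / 1e9:.1f} GB")  # ~7.6 GB, close to the 8.2 GB on disk
```

The small gap between ~7.6 GB and 8.2 GB is consistent with file-format headers and metadata overhead.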

Links

Source: https://openneuro.org/datasets/nm000134
DOI: 10.82901/nemar.nm000134
Catalog: https://huggingface.co/spaces/EEGDash/catalog
Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
