MAPSS: Manifold-based Assessment of Perceptual Source Separation
Granular evaluation of speech and music source separation with the MAPSS measures:
- Perceptual Matching (PM): Measures how closely an output perceptually aligns with its reference. Range: 0-1, higher is better.
- Perceptual Similarity (PS): Measures how well an output is separated from its interfering references. Range: 0-1, higher is better.
⚠️ IMPORTANT: File Order Requirements
Output files MUST be in the same order as reference files!
- If references are: `speaker1.wav`, `speaker2.wav`, `speaker3.wav`
- Then outputs must be: `output1.wav`, `output2.wav`, `output3.wav`
- Where `output1` corresponds to `speaker1`, `output2` to `speaker2`, etc. (a quick local check is sketched below)
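To catch ordering mistakes before uploading, you can pair the two folders locally. A minimal sketch, not part of MAPSS: it assumes files pair up in sorted filename order, so verify the printed mapping by eye.

```python
# Minimal pairing sanity check -- a sketch, not part of MAPSS.
# Assumes outputs pair with references in sorted filename order;
# verify the printed mapping manually before uploading.
from pathlib import Path

def check_pairing(ref_dir: str, out_dir: str) -> None:
    refs = sorted(p.name for p in Path(ref_dir).glob("*.wav"))
    outs = sorted(p.name for p in Path(out_dir).glob("*.wav"))
    if len(refs) != len(outs):
        raise ValueError(f"{len(refs)} references but {len(outs)} outputs")
    for ref, out in zip(refs, outs):
        print(f"{out}  ->  {ref}")

check_pairing("references", "outputs")
```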
Input Format
Upload a ZIP file containing:
your_mixture.zip
├── references/ # Original clean sources
│ ├── speaker1.wav
│ ├── speaker2.wav
│ └── ...
└── outputs/ # Separated outputs (SAME ORDER as references)
├── separated1.wav # Must correspond to speaker1.wav
├── separated2.wav # Must correspond to speaker2.wav
└── ...
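Packaging this layout needs only the standard library. A sketch, assuming your files already sit in local `references/` and `outputs/` folders:

```python
# Package references/ and outputs/ into the ZIP layout shown above.
# Standard library only.
import zipfile
from pathlib import Path

def make_submission(ref_dir: str, out_dir: str,
                    zip_path: str = "your_mixture.zip") -> None:
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for wav in sorted(Path(ref_dir).glob("*.wav")):
            zf.write(wav, arcname=f"references/{wav.name}")
        for wav in sorted(Path(out_dir).glob("*.wav")):
            zf.write(wav, arcname=f"outputs/{wav.name}")

make_submission("references", "outputs")
```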
Audio Requirements
- Format: .wav files
- Sample rate: Any (automatically resampled to 16 kHz)
- Channels: Mono or stereo (converted to mono)
- Number of files: The number of outputs must equal the number of references
- Order: Output files must be in the same order as reference files
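The Space handles resampling and downmixing for you, so offline preprocessing is optional. If you prefer to normalize files locally first, here is a sketch using torchaudio (one library choice among many; not part of the tool):

```python
# Optional offline preprocessing: downmix to mono and resample to 16 kHz.
# The Space already does this automatically; torchaudio is one way to
# replicate it locally.
import torchaudio

def to_mono_16k(in_path: str, out_path: str, target_sr: int = 16_000) -> None:
    wav, sr = torchaudio.load(in_path)            # wav: (channels, samples)
    if wav.shape[0] > 1:                          # stereo (or more) -> mono
        wav = wav.mean(dim=0, keepdim=True)
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=target_sr)
    torchaudio.save(out_path, wav, target_sr)

to_mono_16k("speaker1.wav", "speaker1_16k.wav")
```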
Output Format
The tool generates a ZIP file containing:
- `ps_scores_{model}.csv`: PS scores for each source over time
- `pm_scores_{model}.csv`: PM scores for each source over time
- `params.json`: Parameters used
- `manifest_canonical.json`: File mapping and processing details
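To inspect the scores programmatically, a pandas sketch; the result ZIP name and the CSV layout (one row per frame, one column per source) are assumptions here, so consult `manifest_canonical.json` for the authoritative mapping.

```python
# Read the PS scores out of the result ZIP -- a sketch. "results.zip" is a
# hypothetical download name, and the per-frame/per-source CSV layout is an
# assumption; manifest_canonical.json holds the authoritative file mapping.
import zipfile
import pandas as pd

with zipfile.ZipFile("results.zip") as zf:
    with zf.open("ps_scores_wavlm.csv") as f:     # {model} = wavlm here
        ps = pd.read_csv(f)

print(ps.shape)
print(ps.head())
```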
Score Interpretation
- Valid scores: Computed only for frames in which at least two speakers are active
- NaN values: Appear for inactive speakers, and for every speaker when fewer than two are active in a frame
- Time resolution: 20 ms frames
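Because frames are 20 ms, frame indices convert to seconds directly, and NaN frames should be dropped from any averaging. Continuing the sketch above (same assumed CSV layout):

```python
# Attach a time axis and average per source while ignoring NaN frames.
# Assumes consecutive 20 ms frames per row and one score column per source.
import numpy as np
import pandas as pd

FRAME_SEC = 0.020                                 # 20 ms per frame

ps = pd.read_csv("ps_scores_wavlm.csv")
ps.insert(0, "time_s", np.arange(len(ps)) * FRAME_SEC)
print(ps.drop(columns="time_s").mean(skipna=True))  # per-source means
```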
Available Models
| Model | Description | Default Layer | Use Case |
|---|---|---|---|
| `raw` | Raw waveform features | N/A | Baseline comparison |
| `wavlm` | WavLM Large | 24 | Strong performance |
| `wav2vec2` | Wav2Vec2 Large | 24 | Best overall performance |
| `hubert` | HuBERT Large | 24 | |
| `wavlm_base` | WavLM Base | 12 | |
| `wav2vec2_base` | Wav2Vec2 Base | 12 | Faster, good quality |
| `hubert_base` | HuBERT Base | 12 | |
| `wav2vec2_xlsr` | Wav2Vec2 XLSR-53 | 24 | Multilingual |
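The "Default Layer" column refers to the transformer layer whose hidden states serve as embeddings. A sketch of extracting such features with Hugging Face `transformers`; the `microsoft/wavlm-large` checkpoint is an assumed stand-in for the `wavlm` entry, and MAPSS's own extraction code may differ.

```python
# Extract layer-24 hidden states from WavLM Large with Hugging Face
# transformers -- a sketch; the checkpoint id is an assumed stand-in for
# the `wavlm` entry, and MAPSS's own extraction code may differ.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "microsoft/wavlm-large"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

audio = np.zeros(16_000, dtype=np.float32)        # 1 s placeholder at 16 kHz
inputs = extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
features = out.hidden_states[24]                  # (batch, frames, dim);
                                                  # index 0 is the pre-transformer embedding
```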
Parameters
- Model: Select the embedding model for feature extraction
- Layer: Which transformer layer to use (auto-selected by default)
- Alpha: Diffusion maps parameter (0.0-1.0, default: 1.0)
- 0.0 = No normalization
- 1.0 = Full normalization (recommended)
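Alpha plays its standard role from the diffusion-maps construction of Coifman and Lafon: the affinity kernel `K` is renormalized as `K_alpha = D^-alpha K D^-alpha`, with `D` the diagonal degree matrix, so `alpha = 0` leaves the kernel untouched and `alpha = 1` factors out sampling density. An illustrative numpy sketch of that step (not MAPSS's internal code):

```python
# Alpha-normalization step of diffusion maps (Coifman & Lafon) -- an
# illustrative numpy sketch, not MAPSS's internal implementation.
import numpy as np

def alpha_normalize(K: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    d = K.sum(axis=1)                             # degree of each sample
    d_alpha = d ** -alpha                         # entries of D^{-alpha}
    K_alpha = K * np.outer(d_alpha, d_alpha)      # D^-a K D^-a
    return K_alpha / K_alpha.sum(axis=1, keepdims=True)  # row-stochastic operator
```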
Processing Notes
- The system automatically detects which speakers are active in each frame
- PS/PM scores are only computed between active speakers
- Processing time scales with number of sources and audio length
- GPU acceleration is automatically used when available
- Note: This Hugging Face Space runs with a single GPU allocation
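For local runs, you can check up front whether the GPU path will be taken; a two-line sketch assuming a PyTorch backend (which the embedding models above imply):

```python
# Check whether a local run would use the GPU -- assumes a PyTorch backend.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")
```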
Citation
If you use MAPSS, please cite:
@article{ivry2025mapss,
  title={MAPSS: Manifold-based Assessment of Perceptual Source Separation},
  author={Ivry, Amir and Cornell, Samuele and Watanabe, Shinji},
  journal={arXiv preprint arXiv:2509.09212},
  year={2025}
}
License
Code: MIT License
Paper: CC-BY-4.0
Support
For issues, questions, or contributions, please visit the GitHub repository.