
πŸŽ₯ MLD-VC: Multimodal Dataset for Video Conferencing

When AVSR Meets Video Conferencing: Dataset, Degradation, and the Hidden Mechanism Behind Performance Collapse (CVPR 2026) πŸ“„ [Paper] | πŸ€— [Hugging Face Dataset]


πŸ“Œ Overview

MLD-VC is the first multimodal dataset specifically designed for Audio-Visual Speech Recognition (AVSR) in real-world video conferencing (VC) scenarios.

Unlike traditional AVSR datasets collected in controlled offline environments, MLD-VC explicitly models two critical factors in VC:

  • Transmission Distortions (compression, speech enhancement, etc.)
  • Human Hyper-expression (e.g., Lombard effect)

πŸ” Key Features

  • 🎀 31 speakers, 22.79 hours of recordings
  • 🌐 4 mainstream VC platforms
  • πŸ—£οΈ Bilingual: English & Chinese
  • 🎧 Lombard effect simulation via noise conditions
  • πŸŽ₯ Multimodal data:
    • Video
    • Audio
    • Facial landmarks
    • Text

🚨 Motivation

Existing AVSR systems suffer severe performance degradation in video conferencing due to:

  • Distribution shift caused by speech enhancement algorithms
  • Behavioral changes such as hyper-expression

MLD-VC is designed to bridge the gap between offline datasets and real-world VC deployment.


πŸ“‚ Dataset Structure

The dataset is organized into three aligned modalities:

MLD-VC/
β”œβ”€β”€ video/
β”œβ”€β”€ audio/
β”œβ”€β”€ landmarks/

Each modality follows the same hierarchical structure:

<modality>/
└── Online / Offline
    └── speaker_id
        └── platform
            └── sentence_id
                └── clean / 40db / 60db / 80db

πŸ“– Example

video/
└── Online/
    └── speaker_03/
        └── Zoom/
            └── sentence_012/
                β”œβ”€β”€ clean/
                β”œβ”€β”€ 40db/
                β”œβ”€β”€ 60db/
                └── 80db/
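
Given this layout, samples can be enumerated by walking the directory tree. The sketch below is a minimal example that assumes only the hierarchy shown above (exact folder spellings on disk may differ); it yields one tuple per noise-condition folder.

```python
from pathlib import Path

# Condition folder names as shown in the example tree above.
NOISE_CONDITIONS = ["clean", "40db", "60db", "80db"]

def enumerate_samples(modality_root):
    """Yield (setting, speaker, platform, sentence, condition) tuples
    for every condition folder found under one modality root
    (e.g. MLD-VC/video). Only the folder hierarchy from the card is
    assumed; no file names or extensions."""
    root = Path(modality_root)
    for setting in sorted(root.iterdir()):          # Online / Offline
        if not setting.is_dir():
            continue
        for speaker in sorted(setting.iterdir()):   # speaker_id
            for platform in sorted(speaker.iterdir()):    # platform
                for sentence in sorted(platform.iterdir()):  # sentence_id
                    for cond in NOISE_CONDITIONS:
                        if (sentence / cond).is_dir():
                            yield (setting.name, speaker.name,
                                   platform.name, sentence.name, cond)
```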

🧠 Data Description

1. Online vs Offline

  • Offline:
    • Direct recording (no transmission)
    • Contains hyper-expression (via noise)
  • Online:
    • Recorded after transmission through VC platforms
    • Includes:
      • Compression
      • Speech enhancement
      • Network effects

2. Noise Levels (Lombard Effect)

Each sentence is recorded under 4 noise conditions:

| Condition | Description    |
|-----------|----------------|
| clean     | No noise       |
| 40dB      | Mild noise     |
| 60dB      | Moderate noise |
| 80dB      | Strong noise   |

These conditions modulate the intensity of the Lombard effect, inducing hyper-expression.


3. Platforms

The dataset includes recordings from four mainstream VC platforms:

  • Zoom
  • Tencent Meeting
  • Lark
  • DingTalk

⚠️ Important Notes

πŸ” Recording Protocol Differences

  • In the Offline subset:
    • Speakers 2–8 were recorded on a single device, with recordings repeated across the 4 platforms.
    • The remaining speakers are labeled under the DD platform only, but were actually recorded on 4 different devices simultaneously.

πŸ‘‰ As a result:

  • Platform variation does not always imply device variation
  • Take care when designing cross-platform generalization experiments

❌ Removed Speakers

  • Speakers 0 and 1 have been removed due to poor recording quality

πŸ“ Data Consistency

  • All three modalities (video, audio, landmarks):
    • Are strictly aligned
    • Share an identical folder structure
    • Can be indexed jointly
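
Because the three modalities share an identical folder structure, one relative sample path resolves to all of them. A minimal sketch (the `MLD-VC` root name is taken from the structure section; it returns directories rather than files, since the card does not specify file names or extensions):

```python
from pathlib import Path

def modality_paths(dataset_root, rel_sample):
    """Map one sample's relative path, e.g.
    'Online/speaker_03/Zoom/sentence_012/clean', to its three
    aligned modality folders under the dataset root."""
    root = Path(dataset_root)
    return {m: root / m / rel_sample
            for m in ("video", "audio", "landmarks")}
```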

πŸ”¬ Recommended Use Cases

MLD-VC is suitable for:

βœ” AVSR Robustness

  • Evaluate performance under real VC conditions

βœ” Cross-domain Generalization

  • Train on Offline β†’ Test on Online
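
Since Online/Offline is the top level of each modality's hierarchy, this split can be derived directly from relative sample paths. A minimal sketch, assuming paths shaped like the Dataset Structure example:

```python
def split_offline_online(rel_paths):
    """Partition relative sample paths into a training pool (Offline)
    and a test pool (Online), following the Offline-to-Online
    generalization protocol. Paths are assumed to start with the
    'Online' or 'Offline' folder, as in the card's example tree."""
    train, test = [], []
    for p in rel_paths:
        top = p.split("/", 1)[0]
        (train if top == "Offline" else test).append(p)
    return train, test
```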

βœ” Multimodal Learning

  • Audio-visual fusion
  • Landmark-based modeling

βœ” Distribution Shift Analysis

  • Study impact of:
    • Speech enhancement
    • Lombard effect

πŸ“Š Key Findings (from the paper)

  • AVSR models suffer massive degradation in VC
  • Speech enhancement is the main cause of audio distribution shift
  • Lombard effect β‰ˆ VC distortion (in feature space)
  • Landmark-based features are more stable than image features
  • Fine-tuning on MLD-VC reduces CER by 17.5%

πŸ“Ž Citation

If you find this dataset useful, please cite:

@inproceedings{huang2026mldvc,
  title={When AVSR Meets Video Conferencing: Dataset, Degradation, and the Hidden Mechanism Behind Performance Collapse},
  author={Huang, Yihuan and Xue, Jun and Liu, Jiajun and Li, Daixian and Zhang, Tong and Yi, Zhuolin and Ren, Yanzhen and Li, Kai},
  booktitle={CVPR},
  year={2026}
}

πŸ™ Acknowledgements

This work is supported by:

  • National Natural Science Foundation of China
  • DiDi Chuxing Group

πŸ“¬ Contact

If you have questions, feel free to contact:


⭐ Star This Repo

If you find MLD-VC helpful, please consider giving a ⭐!
