Add task categories and sample usage to dataset card

#2
Opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +217 -199
README.md CHANGED
@@ -1,199 +1,217 @@
1
- ---
2
- license: mit
3
- homepage: https://microsoft.github.io/AVGen-Bench/
4
- configs:
5
- - config_name: default
6
- data_files:
7
- - split: train
8
- path: metadata.parquet
9
- ---
10
-
11
- # AVGen-Bench Generated Videos Data Card
12
-
13
- ## Overview
14
-
15
- This data card describes the generated audio-video outputs stored directly in the repository root by model directory.
16
-
17
- The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.
18
-
19
- [![Project Page](https://img.shields.io/badge/Project%20Page-AVGenBench-8dbb3c?style=for-the-badge&labelColor=4c4c4c)](http://aka.ms/avgenbench)
20
- [![Code Repository](https://img.shields.io/badge/Code%20Repository-GitHub-24292f?style=for-the-badge&logo=github&logoColor=white)](https://github.com/microsoft/AVGen-Bench)
21
- [![Paper](https://img.shields.io/badge/Paper-arXiv-b31b1b?style=for-the-badge&labelColor=4c4c4c)](https://arxiv.org/abs/2604.08540)
22
-
23
- For Hugging Face Hub compatibility, the repository includes a root-level `metadata.parquet` file so the Dataset Viewer can expose each video as a structured row with prompt metadata instead of treating the repo as an unindexed file dump.
24
- The relative video path is stored as a plain string column (`video_path`) rather than a media-typed `file_name` column, which avoids current Dataset Viewer post-processing failures on video rows.
25
-
26
- ## What This Dataset Contains
27
-
28
- The dataset is organized by:
29
-
30
- 1. Model directory
31
- 2. Video category
32
- 3. Generated `.mp4` files
33
-
34
- A typical top-level structure is:
35
-
36
- ```text
37
- AVGen-Bench/
38
- ├── Kling_2.6/
39
- ├── LTX-2/
40
- ├── LTX-2.3/
41
- ├── MOVA_360p_Emu3.5/
42
- ├── MOVA_360p_NanoBanana_2/
43
- ├── Ovi_11/
44
- ├── Seedance_1.5_pro/
45
- ├── Sora_2/
46
- ├── Veo_3.1_fast/
47
- ├── Veo_3.1_quality/
48
- ├── Wan_2.2_HunyuanVideo-Foley/
49
- ├── Wan_2.6/
50
- ├── metadata.parquet
51
- ├── prompts/
52
- └── reference_image/ # optional, depending on generation pipeline
53
- ```
54
-
55
- Within each model directory, videos are grouped by category, for example:
56
-
57
- ```text
58
- Veo_3.1_fast/
59
- ├── ads/
60
- ├── animals/
61
- ├── asmr/
62
- ├── chemical_reaction/
63
- ├── cooking/
64
- ├── gameplays/
65
- ├── movie_trailer/
66
- ├── musical_instrument_tutorial/
67
- ├── news/
68
- ├── physical_experiment/
69
- └── sports/
70
- ```
71
-
72
- ## Prompt Coverage
73
-
74
- Prompt definitions are stored in `prompts/*.json`.
75
-
76
- The current prompt set contains **235 prompts** across **11 categories**:
77
-
78
- | Category | Prompt count |
79
- |---|---:|
80
- | `ads` | 20 |
81
- | `animals` | 20 |
82
- | `asmr` | 20 |
83
- | `chemical_reaction` | 20 |
84
- | `cooking` | 20 |
85
- | `gameplays` | 20 |
86
- | `movie_trailer` | 20 |
87
- | `musical_instrument_tutorial` | 35 |
88
- | `news` | 20 |
89
- | `physical_experiment` | 20 |
90
- | `sports` | 20 |
91
-
92
- Prompt JSON entries typically contain:
93
-
94
- - `content`: a short content descriptor used for naming or indexing
95
- - `prompt`: the full generation prompt
96
-
97
-
98
- ## Data Instance Format
99
-
100
- Each generated item is typically:
101
-
102
- - A single `.mp4` file
103
- - Containing model-generated video and, when supported by the model/pipeline, synthesized audio
104
- - Stored under `<model>/<category>/`
105
-
106
- The filename is usually derived from prompt content after sanitization. Exact naming may vary by generation script or provider wrapper.
107
- In the standard export pipeline, the filename is derived from the prompt's `content` field using the following logic:
108
-
109
- ```python
110
- def safe_filename(name: str, max_len: int = 180) -> str:
111
-     name = str(name).strip()
112
-     name = re.sub(r"[/\\:*?\"<>|\n\r\t]", "_", name)
113
-     name = re.sub(r"\s+", " ", name).strip()
114
-     if not name:
115
-         name = "untitled"
116
-     if len(name) > max_len:
117
-         name = name[:max_len].rstrip()
118
-     return name
119
- ```
120
-
121
- So the expected output path pattern is:
122
-
123
- ```text
124
- <model>/<category>/<safe_filename(content)>.mp4
125
- ```
126
-
127
- For Dataset Viewer indexing, `metadata.parquet` stores one row per exported video with:
128
-
129
- - `video_path`: relative path to the `.mp4` stored as a plain string
130
- - `model`: model directory name
131
- - `category`: benchmark category
132
- - `content`: prompt short name
133
- - `prompt`: full generation prompt
134
- - `prompt_id`: index inside `prompts/<category>.json`
135
-
136
- ## How The Data Was Produced
137
-
138
- The videos were generated by running different T2AV systems on a shared benchmark prompt set.
139
-
140
- Important properties:
141
-
142
- - All systems are evaluated against the same category structure
143
- - Outputs are model-generated rather than human-recorded
144
- - Different models may expose different generation settings, resolutions, or conditioning mechanisms
145
- - Some pipelines may additionally use first-frame or reference-image inputs, depending on the underlying model
146
-
147
- ## Intended Uses
148
-
149
- This dataset is intended for:
150
-
151
- - Benchmarking T2AV generation systems
152
- - Running AVGen-Bench evaluation scripts
153
- - Comparing failure modes across models
154
- - Qualitative demo curation
155
- - Error analysis by category or prompt type
156
-
157
- ## Out-of-Scope Uses
158
-
159
- This dataset is not intended for:
160
-
161
- - Training a general-purpose video generation model
162
- - Treating model outputs as factual evidence of real-world events
163
- - Safety certification of a model without additional testing
164
- - Any claim that benchmark performance fully captures downstream deployment quality
165
-
166
- ## Known Limitations
167
-
168
- - Outputs are synthetic and inherit the biases and failure modes of the generating models
169
- - Some categories emphasize benchmark stress-testing rather than natural real-world frequency
170
- - File availability may vary across models if a generation job failed, timed out, or was filtered
171
- - Different model providers enforce different safety and moderation policies; some prompts may be rejected during provider-side review, which can lead to missing videos for specific models even when the prompt exists in the benchmark
172
-
173
-
174
- ## Risks and Responsible Use
175
-
176
- Because these are generated videos:
177
-
178
- - Visual realism does not imply factual correctness
179
- - Audio may contain artifacts, intelligibility failures, or misleading synchronization
180
- - Generated content may reflect stereotypes, implausible causal structure, or unsafe outputs inherited from upstream models
181
-
182
- Anyone redistributing results should clearly label them as synthetic model outputs.
183
-
184
- ## Citation
185
-
186
- If you find AVGen-Bench useful, please cite:
187
-
188
- ```bibtex
189
- @misc{zhou2026avgenbenchtaskdrivenbenchmarkmultigranular,
190
- title={AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation},
191
- author={Ziwei Zhou and Zeyuan Lai and Rui Wang and Yifan Yang and Zhen Xing and Yuqing Yang and Qi Dai and Lili Qiu and Chong Luo},
192
- year={2026},
193
- eprint={2604.08540},
194
- archivePrefix={arXiv},
195
- primaryClass={cs.CV},
196
- url={https://arxiv.org/abs/2604.08540},
197
- }
198
- ```
199
-
1
+ ---
2
+ license: mit
3
+ homepage: https://microsoft.github.io/AVGen-Bench/
4
+ task_categories:
5
+ - text-to-video
6
+ - text-to-audio
7
+ configs:
8
+ - config_name: default
9
+ data_files:
10
+ - split: train
11
+ path: metadata.parquet
12
+ ---
13
+
14
+ # AVGen-Bench Generated Videos Data Card
15
+
16
+ ## Overview
17
+
18
+ This data card describes the generated audio-video outputs stored directly in the repository root by model directory.
19
+
20
+ The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It was presented in the paper [AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation](https://arxiv.org/abs/2604.08540). It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.
21
+
22
+ [![Project Page](https://img.shields.io/badge/Project%20Page-AVGenBench-8dbb3c?style=for-the-badge&labelColor=4c4c4c)](http://aka.ms/avgenbench)
23
+ [![Code Repository](https://img.shields.io/badge/Code%20Repository-GitHub-24292f?style=for-the-badge&logo=github&logoColor=white)](https://github.com/microsoft/AVGen-Bench)
24
+ [![Paper](https://img.shields.io/badge/Paper-arXiv-b31b1b?style=for-the-badge&labelColor=4c4c4c)](https://arxiv.org/abs/2604.08540)
25
+
26
+ For Hugging Face Hub compatibility, the repository includes a root-level `metadata.parquet` file so the Dataset Viewer can expose each video as a structured row with prompt metadata instead of treating the repo as an unindexed file dump.
27
+ The relative video path is stored as a plain string column (`video_path`) rather than a media-typed `file_name` column, which avoids current Dataset Viewer post-processing failures on video rows.
28
+
29
+ ## Sample Usage
30
+
31
+ As described in the GitHub repository, you can generate videos from the benchmark prompts using the following command:
32
+
33
+ ```bash
34
+ python batch_generate.py \
35
+ --provider sora2 \
36
+ --task_type video_generation \
37
+ --prompts_dir ./prompts \
38
+ --out_dir ./generated_videos/sora2 \
39
+ --concurrency 2 \
40
+ --seconds 12 \
41
+ --size 1280x720
42
+ ```
43
+
44
+ ## What This Dataset Contains
45
+
46
+ The dataset is organized by:
47
+
48
+ 1. Model directory
49
+ 2. Video category
50
+ 3. Generated `.mp4` files
51
+
52
+ A typical top-level structure is:
53
+
54
+ ```text
55
+ AVGen-Bench/
56
+ ├── Kling_2.6/
57
+ ├── LTX-2/
58
+ ├── LTX-2.3/
59
+ ├── MOVA_360p_Emu3.5/
60
+ ├── MOVA_360p_NanoBanana_2/
61
+ ├── Ovi_11/
62
+ ├── Seedance_1.5_pro/
63
+ ├── Sora_2/
64
+ ├── Veo_3.1_fast/
65
+ ├── Veo_3.1_quality/
66
+ ├── Wan_2.2_HunyuanVideo-Foley/
67
+ ├── Wan_2.6/
68
+ ├── metadata.parquet
69
+ ├── prompts/
70
+ └── reference_image/ # optional, depending on generation pipeline
71
+ ```
72
+
73
+ Within each model directory, videos are grouped by category, for example:
74
+
75
+ ```text
76
+ Veo_3.1_fast/
77
+ ├── ads/
78
+ ├── animals/
79
+ ├── asmr/
80
+ ├── chemical_reaction/
81
+ ├── cooking/
82
+ ├── gameplays/
83
+ ├── movie_trailer/
84
+ ├── musical_instrument_tutorial/
85
+ ├── news/
86
+ ├── physical_experiment/
87
+ └── sports/
88
+ ```
89
+
90
+ ## Prompt Coverage
91
+
92
+ Prompt definitions are stored in `prompts/*.json`.
93
+
94
+ The current prompt set contains **235 prompts** across **11 categories**:
95
+
96
+ | Category | Prompt count |
97
+ |---|---:|
98
+ | `ads` | 20 |
99
+ | `animals` | 20 |
100
+ | `asmr` | 20 |
101
+ | `chemical_reaction` | 20 |
102
+ | `cooking` | 20 |
103
+ | `gameplays` | 20 |
104
+ | `movie_trailer` | 20 |
105
+ | `musical_instrument_tutorial` | 35 |
106
+ | `news` | 20 |
107
+ | `physical_experiment` | 20 |
108
+ | `sports` | 20 |
109
+
110
+ Prompt JSON entries typically contain:
111
+
112
+ - `content`: a short content descriptor used for naming or indexing
113
+ - `prompt`: the full generation prompt
114
+
115
+
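To show how these fields are consumed, here is a minimal sketch that parses a prompt list of this shape. The two entries below are invented for illustration, not taken from the benchmark:

```python
import json

# Invented entries mirroring the documented schema of prompts/<category>.json.
raw = """
[
  {"content": "glass of water tipping over",
   "prompt": "A glass of water slowly tips over on a wooden table, spilling with an audible splash."},
  {"content": "match igniting",
   "prompt": "A match is struck against the box and ignites with a sharp hiss."}
]
"""

entries = json.loads(raw)
for prompt_id, entry in enumerate(entries):
    # prompt_id in metadata.parquet is simply the index within the category file.
    print(prompt_id, entry["content"])
```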
116
+ ## Data Instance Format
117
+
118
+ Each generated item is typically:
119
+
120
+ - A single `.mp4` file
121
+ - Containing model-generated video and, when supported by the model/pipeline, synthesized audio
122
+ - Stored under `<model>/<category>/`
123
+
124
+ The filename is usually derived from prompt content after sanitization. Exact naming may vary by generation script or provider wrapper.
125
+ In the standard export pipeline, the filename is derived from the prompt's `content` field using the following logic:
126
+
127
+ ```python
128
+ import re
+
+ def safe_filename(name: str, max_len: int = 180) -> str:
129
+     name = str(name).strip()
130
+     name = re.sub(r"[/\\:*?\"<>|\n\r\t]", "_", name)
132
+     name = re.sub(r"\s+", " ", name).strip()
133
+     if not name:
134
+         name = "untitled"
135
+     if len(name) > max_len:
136
+         name = name[:max_len].rstrip()
137
+     return name
138
+ ```
139
+
140
+ So the expected output path pattern is:
141
+
142
+ ```text
143
+ <model>/<category>/<safe_filename(content)>.mp4
144
+ ```
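As a worked example of this pattern, the snippet below re-states the sanitizer so it runs standalone and builds one output path; the model, category, and `content` values are illustrative, not taken from the benchmark:

```python
import re

def safe_filename(name: str, max_len: int = 180) -> str:
    # Same sanitization steps as the export pipeline described above.
    name = str(name).strip()
    name = re.sub(r'[/\\:*?"<>|\n\r\t]', "_", name)
    name = re.sub(r"\s+", " ", name).strip()
    if not name:
        name = "untitled"
    if len(name) > max_len:
        name = name[:max_len].rstrip()
    return name

# ':' and '?' are path-unsafe on some filesystems, so both become '_'.
path = f"Veo_3.1_fast/cooking/{safe_filename('Pasta: how al dente?')}.mp4"
print(path)  # Veo_3.1_fast/cooking/Pasta_ how al dente_.mp4
```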
145
+
146
+ For Dataset Viewer indexing, `metadata.parquet` stores one row per exported video with:
147
+
148
+ - `video_path`: relative path to the `.mp4` stored as a plain string
149
+ - `model`: model directory name
150
+ - `category`: benchmark category
151
+ - `content`: prompt short name
152
+ - `prompt`: full generation prompt
153
+ - `prompt_id`: index inside `prompts/<category>.json`
154
+
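These columns make per-model and per-category slicing straightforward. Below is a sketch using stand-in rows (all values invented); with the actual file you would typically start from `pandas.read_parquet("metadata.parquet")` and filter the resulting frame the same way:

```python
# Stand-in rows following the documented metadata.parquet schema; the
# video_path/content/prompt values here are invented for illustration.
rows = [
    {"video_path": "Sora_2/cooking/omelette.mp4", "model": "Sora_2",
     "category": "cooking", "content": "omelette", "prompt": "...", "prompt_id": 0},
    {"video_path": "Veo_3.1_fast/news/evening_bulletin.mp4", "model": "Veo_3.1_fast",
     "category": "news", "content": "evening_bulletin", "prompt": "...", "prompt_id": 3},
]

def select(rows, model=None, category=None):
    """Keep rows matching the given model and/or category filters."""
    return [r for r in rows
            if (model is None or r["model"] == model)
            and (category is None or r["category"] == category)]

print([r["video_path"] for r in select(rows, model="Sora_2")])
# ['Sora_2/cooking/omelette.mp4']
```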
155
+ ## How The Data Was Produced
156
+
157
+ The videos were generated by running different T2AV systems on a shared benchmark prompt set.
158
+
159
+ Important properties:
160
+
161
+ - All systems are evaluated against the same category structure
162
+ - Outputs are model-generated rather than human-recorded
163
+ - Different models may expose different generation settings, resolutions, or conditioning mechanisms
164
+ - Some pipelines may additionally use first-frame or reference-image inputs, depending on the underlying model
165
+
166
+ ## Intended Uses
167
+
168
+ This dataset is intended for:
169
+
170
+ - Benchmarking T2AV generation systems
171
+ - Running AVGen-Bench evaluation scripts
172
+ - Comparing failure modes across models
173
+ - Qualitative demo curation
174
+ - Error analysis by category or prompt type
175
+
176
+ ## Out-of-Scope Uses
177
+
178
+ This dataset is not intended for:
179
+
180
+ - Training a general-purpose video generation model
181
+ - Treating model outputs as factual evidence of real-world events
182
+ - Safety certification of a model without additional testing
183
+ - Any claim that benchmark performance fully captures downstream deployment quality
184
+
185
+ ## Known Limitations
186
+
187
+ - Outputs are synthetic and inherit the biases and failure modes of the generating models
188
+ - Some categories emphasize benchmark stress-testing rather than natural real-world frequency
189
+ - File availability may vary across models if a generation job failed, timed out, or was filtered
190
+ - Different model providers enforce different safety and moderation policies; some prompts may be rejected during provider-side review, which can lead to missing videos for specific models even when the prompt exists in the benchmark
191
+
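The missing-file caveat above can be audited mechanically. A hedged sketch, assuming the `<model>/<category>/<content>.mp4` layout described in this card (the helper name and the sample files are hypothetical):

```python
import tempfile
from pathlib import Path

def missing_videos(root, model, expected):
    """List expected <model>/<category>/<name>.mp4 paths absent on disk.

    `expected` maps category -> sanitized content names, e.g. as derived
    from prompts/<category>.json by the export pipeline.
    """
    gaps = []
    for category, names in expected.items():
        for name in names:
            if not (Path(root) / model / category / f"{name}.mp4").exists():
                gaps.append(f"{model}/{category}/{name}.mp4")
    return gaps

# Demonstrate on a throwaway directory with one of two expected files present.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "Sora_2" / "cooking").mkdir(parents=True)
    (Path(tmp) / "Sora_2" / "cooking" / "omelette.mp4").touch()
    print(missing_videos(tmp, "Sora_2", {"cooking": ["omelette", "risotto"]}))
    # ['Sora_2/cooking/risotto.mp4']
```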
192
+
193
+ ## Risks and Responsible Use
194
+
195
+ Because these are generated videos:
196
+
197
+ - Visual realism does not imply factual correctness
198
+ - Audio may contain artifacts, intelligibility failures, or misleading synchronization
199
+ - Generated content may reflect stereotypes, implausible causal structure, or unsafe outputs inherited from upstream models
200
+
201
+ Anyone redistributing results should clearly label them as synthetic model outputs.
202
+
203
+ ## Citation
204
+
205
+ If you find AVGen-Bench useful, please cite:
206
+
207
+ ```bibtex
208
+ @misc{zhou2026avgenbenchtaskdrivenbenchmarkmultigranular,
209
+ title={AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation},
210
+ author={Ziwei Zhou and Zeyuan Lai and Rui Wang and Yifan Yang and Zhen Xing and Yuqing Yang and Qi Dai and Lili Qiu and Chong Luo},
211
+ year={2026},
212
+ eprint={2604.08540},
213
+ archivePrefix={arXiv},
214
+ primaryClass={cs.CV},
215
+ url={https://arxiv.org/abs/2604.08540},
216
+ }
217
+ ```