Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 5 new columns ({'num_samples', 'bands', 'Dataset', 'attributes', 'split'}) and 3 missing columns ({'updated_at', 'config', 'chunks'}).
This happened while the json dataset builder was generating data using
hf://datasets/snchen1230/USAVars/train/metadata.json (at revision f60dbd003ae015bd1a05df54affd8db4b1bfbf6d)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
Dataset: string
split: string
num_samples: int64
attributes: struct<image: struct<dtype: string, format: string>, treecover: struct<dtype: string, format: string, unit: string>, elevation: struct<dtype: string, format: string, unit: string>, population: struct<dtype: string, format: string, unit: string>>
  child 0, image: struct<dtype: string, format: string>
      child 0, dtype: string
      child 1, format: string
  child 1, treecover: struct<dtype: string, format: string, unit: string>
      child 0, dtype: string
      child 1, format: string
      child 2, unit: string
  child 2, elevation: struct<dtype: string, format: string, unit: string>
      child 0, dtype: string
      child 1, format: string
      child 2, unit: string
  child 3, population: struct<dtype: string, format: string, unit: string>
      child 0, dtype: string
      child 1, format: string
      child 2, unit: string
bands: struct<0: string, 1: string, 2: string, 3: string>
  child 0, 0: string
  child 1, 1: string
  child 2, 2: string
  child 3, 3: string
to
{'chunks': [{'chunk_bytes': Value(dtype='int64', id=None), 'chunk_size': Value(dtype='int64', id=None), 'dim': Value(dtype='null', id=None), 'filename': Value(dtype='string', id=None)}], 'config': {'chunk_bytes': Value(dtype='int64', id=None), 'chunk_size': Value(dtype='null', id=None), 'compression': Value(dtype='null', id=None), 'data_format': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'data_spec': Value(dtype='string', id=None), 'encryption': Value(dtype='null', id=None), 'item_loader': Value(dtype='string', id=None)}, 'updated_at': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1412, in compute_config_parquet_and_info_response
    parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 988, in stream_convert_to_parquet
    builder._prepare_split(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 5 new columns ({'num_samples', 'bands', 'Dataset', 'attributes', 'split'}) and 3 missing columns ({'updated_at', 'config', 'chunks'}).
This happened while the json dataset builder was generating data using
hf://datasets/snchen1230/USAVars/train/metadata.json (at revision f60dbd003ae015bd1a05df54affd8db4b1bfbf6d)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
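The builder's complaint can be reproduced locally. A minimal sketch (stdlib only; the helper name is hypothetical) that compares two column sets the same way the json builder does, reporting columns that are "new" relative to the first file's schema and columns that are "missing" from the second file:

```python
def column_mismatch(reference_cols, other_cols):
    """Compare two column sets the way the json dataset builder does:
    columns in `other_cols` but not `reference_cols` are "new";
    columns in `reference_cols` but absent from `other_cols` are "missing"."""
    new = set(other_cols) - set(reference_cols)
    missing = set(reference_cols) - set(other_cols)
    return new, missing

# Column sets taken from the error message above: the first schema the
# builder saw (chunks/config/updated_at) vs. train/metadata.json.
index_cols = {"chunks", "config", "updated_at"}
metadata_cols = {"Dataset", "split", "num_samples", "attributes", "bands"}

new, missing = column_mismatch(index_cols, metadata_cols)
print(f"{len(new)} new columns ({sorted(new)}) and "
      f"{len(missing)} missing columns ({sorted(missing)})")
# → 5 new columns (['Dataset', 'attributes', 'bands', 'num_samples', 'split'])
#   and 3 missing columns (['chunks', 'config', 'updated_at'])
```

The counts match the error message exactly: the builder locked onto the first file's three columns and then hit metadata.json, whose five columns share nothing with that schema.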
The row preview shows two rows with disjoint schemas (columns: `chunks` list, `config` dict, `updated_at` string, `Dataset` string, `split` string, `num_samples` int64, `attributes` dict, `bands` dict).

Row 1 — `chunks`, `config`, and `updated_at` populated; all other columns null:

```json
{
  "chunks": [
    { "chunk_bytes": 255807806, "chunk_size": 1251, "dim": null, "filename": "chunk-0-0.bin" },
    { "chunk_bytes": 255934155, "chunk_size": 1258, "dim": null, "filename": "chunk-0-1.bin" },
    { "chunk_bytes": 255825424, "chunk_size": 1255, "dim": null, "filename": "ch..." }
  ],
  "config": {
    "chunk_bytes": 256000000,
    "chunk_size": null,
    "compression": null,
    "data_format": ["bytes", "pickle", "pickle", "pickle"],
    "data_spec": "[1, {\"type\": \"builtins.dict\", \"context\": \"[\\\"hdf5_data\\\", \\\"treecover\\\", \\\"elevation\\\", \\\"population\\\"]\", \"children_spec\": [{\"type\": null, \"context\": null, \"children_spec\": []}, {\"type\": null, \"context\": null, \"children_spec\": []}, {\"type\": null, \"context\": null, \"children_spec\": []}, {\"type\": null, \"context\": null, \"children_spec\": []}]}]",
    "encryption": null,
    "item_loader": "PyTreeLoader"
  },
  "updated_at": 1735834532.5502412
}
```

Row 2 — `Dataset`, `split`, `num_samples`, `attributes`, and `bands` populated; all other columns null:

```json
{
  "Dataset": "USAVars",
  "split": "train",
  "num_samples": 68513,
  "attributes": {
    "image":      { "dtype": "uint8",   "format": "numpy" },
    "treecover":  { "dtype": "float32", "format": "numpy", "unit": "percentage" },
    "elevation":  { "dtype": "float32", "format": "numpy", "unit": "meters" },
    "population": { "dtype": "float32", "format": "numpy", "unit": "people_per_square_kilometer" }
  },
  "bands": { "0": "Red", "1": "Green", "2": "Blue", "3": "NearInfrared" }
}
```
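For readers who load the metadata row directly, note that `bands` is keyed by band index as *strings*. A small sketch (using the dict shown above) that recovers the ordered band list by sorting on the integer value of each key:

```python
# `bands` as it appears in train/metadata.json (shown above).
bands = {"0": "Red", "1": "Green", "2": "Blue", "3": "NearInfrared"}

# Sort by the numeric value of the string keys to recover band order,
# so "10" would sort after "9" rather than after "1".
band_order = [name for _, name in sorted(bands.items(), key=lambda kv: int(kv[0]))]
print(band_order)  # → ['Red', 'Green', 'Blue', 'NearInfrared']
```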
No dataset card yet
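Following the error message's second suggestion, one way to stop the viewer from mixing the chunk index files with `metadata.json` is a `configs` block in the dataset card's YAML front matter, pointing each configuration at only the files that share a schema. A sketch, assuming the layout implied by the paths above (the config name and exact file globs are assumptions):

```yaml
configs:
  - config_name: metadata
    data_files:
      - split: train
        path: train/metadata.json
```

Files not matched by any configuration's `data_files` are then simply excluded from that configuration, so the two incompatible schemas never end up in the same Arrow table.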