Dataset Viewer issue: TypeError: Couldn't cast array

#5 opened by albertvillanova

The dataset viewer is not working.

Error details:

Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type
              struct<identifier: int64, comment: string, is_minor_edit: bool, editor: struct<identifier: int64, name: string, is_anonymous: bool, edit_count: int64, groups: list<item: string>, is_patroller: bool, date_started: timestamp[s], is_admin: bool>, number_of_characters: int64, size: struct<value: int64, unit_text: string>, tags: list<item: string>, scores: struct<revertrisk: struct<probability: struct<false: double, true: double>, prediction: bool>>, maintenance_tags: struct<>, noindex: bool>
              to
              {'identifier': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None), 'is_minor_edit': Value(dtype='bool', id=None), 'scores': {'revertrisk': {'probability': {'false': Value(dtype='float64', id=None), 'true': Value(dtype='float64', id=None)}, 'prediction': Value(dtype='bool', id=None)}}, 'editor': {'identifier': Value(dtype='int64', id=None), 'name': Value(dtype='string', id=None), 'edit_count': Value(dtype='int64', id=None), 'groups': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'date_started': Value(dtype='timestamp[s]', id=None), 'is_patroller': Value(dtype='bool', id=None), 'is_bot': Value(dtype='bool', id=None), 'is_admin': Value(dtype='bool', id=None), 'is_anonymous': Value(dtype='bool', id=None), 'has_advanced_rights': Value(dtype='bool', id=None)}, 'number_of_characters': Value(dtype='int64', id=None), 'size': {'value': Value(dtype='int64', id=None), 'unit_text': Value(dtype='string', id=None)}, 'noindex': Value(dtype='bool', id=None), 'maintenance_tags': {'pov_count': Value(dtype='int64', id=None), 'update_count': Value(dtype='int64', id=None), 'citation_needed_count': Value(dtype='int64', id=None)}, 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'is_breaking_news': Value(dtype='bool', id=None)}
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2122, in cast_array_to_feature
                  raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
              TypeError: Couldn't cast array of type
              struct<identifier: int64, comment: string, is_minor_edit: bool, editor: struct<identifier: int64, name: string, is_anonymous: bool, edit_count: int64, groups: list<item: string>, is_patroller: bool, date_started: timestamp[s], is_admin: bool>, number_of_characters: int64, size: struct<value: int64, unit_text: string>, tags: list<item: string>, scores: struct<revertrisk: struct<probability: struct<false: double, true: double>, prediction: bool>>, maintenance_tags: struct<>, noindex: bool>
              to
              {'identifier': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None), 'is_minor_edit': Value(dtype='bool', id=None), 'scores': {'revertrisk': {'probability': {'false': Value(dtype='float64', id=None), 'true': Value(dtype='float64', id=None)}, 'prediction': Value(dtype='bool', id=None)}}, 'editor': {'identifier': Value(dtype='int64', id=None), 'name': Value(dtype='string', id=None), 'edit_count': Value(dtype='int64', id=None), 'groups': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'date_started': Value(dtype='timestamp[s]', id=None), 'is_patroller': Value(dtype='bool', id=None), 'is_bot': Value(dtype='bool', id=None), 'is_admin': Value(dtype='bool', id=None), 'is_anonymous': Value(dtype='bool', id=None), 'has_advanced_rights': Value(dtype='bool', id=None)}, 'number_of_characters': Value(dtype='int64', id=None), 'size': {'value': Value(dtype='int64', id=None), 'unit_text': Value(dtype='string', id=None)}, 'noindex': Value(dtype='bool', id=None), 'maintenance_tags': {'pov_count': Value(dtype='int64', id=None), 'update_count': Value(dtype='int64', id=None), 'citation_needed_count': Value(dtype='int64', id=None)}, 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'is_breaking_news': Value(dtype='bool', id=None)}
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1391, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 990, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

I am investigating it.

Wikimedia org

Thanks @albertvillanova!

We'll also be looking at it; let us know if we need to change anything.

We will need to define the data types explicitly instead of relying on type inference.

However, we also need to fix an underlying issue in the datasets library with JSON-lines data files. I opened PRs:


Thanks for all the help @albertvillanova!
Looks like the issues are fixed on your end?
We are having a closer look at the schema updates on our end and will get back to you.
