andylolu24/ollm-wikipedia (Text Generation)
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 25 new columns ({'v', 'showprefixforinfo', 'logtostderr', 'split_prop', 'helpfull', 'help', 'only_check_args', 'profile_file', '?', 'helpxml', 'helpshort', 'run_with_profiling', 'log_dir', 'graph_file', 'output_dir', 'alsologtostderr', 'seed', 'split_depth', 'verbosity', 'run_with_pdb', 'stderrthreshold', 'pdb', 'logger_levels', 'pdb_post_mortem', 'use_cprofile_for_profiling'}) and 5 missing columns ({'links', 'multigraph', 'nodes', 'directed', 'graph'}). This happened while the json dataset builder was generating data using hf://datasets/andylolu24/wiki-ol/train_eval_split/train_test_split_flags.json (at revision f5e403e056eb39173b37ec494f14bbd7ffaf021c). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
logtostderr: bool
alsologtostderr: bool
log_dir: string
v: int64
verbosity: int64
logger_levels: struct<>
stderrthreshold: string
showprefixforinfo: bool
run_with_pdb: bool
pdb_post_mortem: bool
pdb: bool
run_with_profiling: bool
profile_file: null
use_cprofile_for_profiling: bool
only_check_args: bool
graph_file: string
output_dir: string
split_depth: int64
split_prop: double
seed: int64
?: bool
help: bool
helpshort: bool
helpfull: bool
helpxml: bool
to
{'directed': Value(dtype='bool', id=None), 'multigraph': Value(dtype='bool', id=None), 'graph': {'root': Value(dtype='int64', id=None)}, 'nodes': [{'title': Value(dtype='string', id=None), 'pages': [{'id': Value(dtype='int64', id=None), 'title': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None)}], 'id': Value(dtype='int64', id=None)}], 'links': [{'source': Value(dtype='int64', id=None), 'target': Value(dtype='int64', id=None)}]}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1412, in compute_config_parquet_and_info_response
    parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 988, in stream_convert_to_parquet
    builder._prepare_split(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
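The error's second suggestion, separating the incompatible files, can also be applied on the consumer side: when loading the data with the datasets library, restricting data_files to the graph JSONs keeps the flags file out of the build entirely. A minimal sketch (split names are illustrative; the file paths are the ones listed further down this page):

from datasets import load_dataset

# Load only the node-link graph files, which share one schema; the
# train_test_split_flags.json file is never read, so no cast error occurs.
ds = load_dataset(
    "andylolu24/wiki-ol",
    data_files={
        "train": "train_eval_split/train_graph.json",
        "test": "train_eval_split/test_graph.json",
    },
)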
Dataset viewer preview (condensed): the preview interleaves rows from two incompatible JSON schemas, which is exactly the column mismatch reported above.

The *_graph.json files are node-link graphs with columns directed (bool), multigraph (bool), graph (dict, e.g. {"root": 7345184}), nodes (a list of {id, title, pages: [{id, title, abstract}]} entries, with category titles such as "Anti-torture laws", "Economic impact of the COVID-19 pandemic" and "Upper houses") and links (a list of {source, target} edges).

The train_test_split_flags.json file is a dump of the command-line flags used to produce the split, with columns graph_file, output_dir, split_depth, split_prop, seed (e.g. graph_file=out/data/wikipedia/v2/train_test_split/train_graph.json, output_dir=out/data/wikipedia/v2/train_eval_split, split_depth=1, split_prop=0.3, seed=0) plus generic logging and debugging flags (logtostderr, alsologtostderr, log_dir, v, verbosity, logger_levels, stderrthreshold, showprefixforinfo, run_with_pdb, pdb_post_mortem, pdb, run_with_profiling, profile_file, use_cprofile_for_profiling, only_check_args, ?, help, helpshort, helpfull, helpxml).
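For reference, a toy example of that node-link layout (the field names follow the schema above; the ids, titles and abstract here are made up):

# Hypothetical miniature graph in the same layout as the *_graph.json files.
toy_graph = {
    "directed": True,
    "multigraph": False,
    "graph": {"root": 1},
    "nodes": [
        {"id": 1, "title": "Root category", "pages": []},
        {"id": 2, "title": "Subcategory", "pages": [
            {"id": 10, "title": "Some page", "abstract": "..."},
        ]},
    ],
    "links": [{"source": 1, "target": 2}],
}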
Repository: https://github.com/andylolu2/ollm
Graph files:
train_eval_split/train_graph.json
train_eval_split/test_graph.json
train_test_split/test_graph.json
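These files live in the dataset repo, so (a small sketch, assuming the standard huggingface_hub client) one way to pull a single graph file down to a local path before loading it:

from huggingface_hub import hf_hub_download

# Download one graph file from the dataset repo; the returned string is a
# local cache path, used as path_to_graph in the loading snippet below.
path_to_graph = hf_hub_download(
    repo_id="andylolu24/wiki-ol",
    filename="train_eval_split/train_graph.json",
    repo_type="dataset",
)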
Load the graph with networkx:

# path_to_graph is a local copy of one of the graph files (e.g. the path returned above)
import json
import networkx as nx

with open(path_to_graph, "r") as f:
    G = nx.node_link_graph(json.load(f), edges="links")
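From there (a sketch, assuming the node-link fields shown in the viewer error above) the root category and its children can be inspected directly:

# The files store directed graphs, so node_link_graph returns a DiGraph;
# "root", "title" and "pages" are the node-link fields described above.
root = G.graph["root"]
print(G.nodes[root]["title"])
print([G.nodes[child]["title"] for child in G.successors(root)])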