[ { "path": "table_paper/2407.00017v1.json", "table_id": "1", "section": "5", "all_context": [ "To convert between CityJSON and CityJSONSeq files (and vice-versa), we have developed the open-source software cjseq, which is available at https://github.com/cityjson/cjseq/ under a permissive open-source license.", "The command-line program handles the conversion not only of the geometries, but also of the materials, the textures, and the geometry templates that the dataset could contain.", "It includes three sub-commands: cat: CityJSON CityJSONSeq; collect: CityJSONSeq CityJSON; filter: to filter city objects in a CityJSONSeq, randomly or based on a bounding box.", "It should be observed that the conversion is an efficient process: the rather large dataset Helskinki from Table 1 , which contains more than \\qty77000 buildings and whose CityJSON file is \\qty572\\mega, takes only \\qty4.7sec to be converted to a CityJSONSeq file, and the reverse operation takes \\qty5.7sec (on a standard laptop).", "" ], "target_context_ids": [ 3 ], "selected_paragraphs": [ "[paragraph id = 3] It should be observed that the conversion is an efficient process: the rather large dataset Helskinki from Table 1 , which contains more than \\qty77000 buildings and whose CityJSON file is \\qty572\\mega, takes only \\qty4.7sec to be converted to a CityJSONSeq file, and the reverse operation takes \\qty5.7sec (on a standard laptop)." ], "table_html": "
Table 1: The datasets used for the benchmark.

dataset      | CityObjects  | app.    | CityJSON size | CityJSONSeq size | compr. | total vertices | largest | shared
-------------|--------------|---------|---------------|------------------|--------|----------------|---------|-------
3DBAG        | 1110 bldgs   |         | 6.7 MB        | 5.9 MB           | 12%    |                |         | 0.1%
3DBV         | 71634 misc   |         | 378 MB        | 317 MB           | 16%    |                |         | 21.0%
Helsinki     | 77231 bldgs  |         | 572 MB        | 412 MB           | 28%    |                |         | 0.0%
Helsinki_tex | 77231 bldgs  | tex     | 713 MB        | 644 MB           | 10%    |                |         | 0.0%
Ingolstadt   | 55 bldgs     |         | 4.8 MB        | 3.8 MB           | 25%    |                |         | 0.0%
Montréal     | 294 bldgs    | tex     | 5.4 MB        | 4.6 MB           | 15%    |                |         | 2.0%
NYC          | 23777 bldgs  |         | 105 MB        | 95 MB            | 10%    |                |         | 0.8%
Railway      | 50 misc      | tex+mat | 4.3 MB        | 4.0 MB           | 8%     |                |         | 0.4%
Rotterdam    | 853 bldgs    | tex     | 2.6 MB        | 2.7 MB           | -4%    |                |         | 20.0%
Vienna       | 307 bldgs    |         | 5.4 MB        | 4.8 MB           | 11%    |                |         | 0.0%
Zürich       | 52834 bldgs  |         | 279 MB        | 247 MB           | 11%    |                |         | 2.6%
  • app.: appearance; 'tex' means textures are stored, 'mat' means materials are stored
  • compr.: compression factor, i.e. the relative size reduction of the CityJSONSeq file with respect to the CityJSON file
  • largest: number of vertices in the largest feature of the stream
  • shared: percentage of vertices that are used to represent different city objects
", "perturb_sentence_id": [ 3, 4 ], "output": { "perturbed_statement": "[paragraph id = 3] It should be observed that the conversion is an efficient process: the rather large dataset Helskinki from Table 1, which contains more than \nqty77000 buildings and whose CityJSON file is \nqty772\beta, takes only \nqty4.7sec to be converted to a CityJSONSeq file, and the reverse operation takes \nqty5.7sec (on a standard laptop).", "perturbed_explanation": "1. The original statement reflects the dataset Helskinki, containing more than 77,000 buildings, is efficiently converted from a CityJSON file of size 572 MB to a CityJSONSeq file in 4.7 seconds, and the reverse operation takes 5.7 seconds using a standard laptop.\n2. The statement mentions the dataset Helskinki, which indeed contains over 77,000 buildings. However, it incorrectly states the size of the CityJSON file as 772 MB instead of the correct 572 MB. This is important because 200 MB of size difference could imply a higher storage need than actually required, affecting the perceived efficiency and storage considerations for similar datasets." } } ]