---
license: cc-by-nc-4.0
language:
- en
pretty_name: Stark
tags:
- multi-modal dialogue
annotation_creators:
- machine-generated
task_ids:
- conversational
dataset_size: 93.6K
---

# Dataset Card for Stark

[🏠 Homepage](https://stark-dataset.github.io/) | [💻 Github](https://github.com/passing2961/Stark) | [📄 Arxiv](https://arxiv.org/abs/2407.03958) | [📕 PDF](https://arxiv.org/pdf/2407.03958)

## List of Provided Model Series

- **Ultron-Summarizer-Series:** [🤖 Ultron-Summarizer-1B](https://huggingface.co/passing2961/Ultron-Summarizer-1B) | [🤖 Ultron-Summarizer-3B](https://huggingface.co/passing2961/Ultron-Summarizer-3B) | [🤖 Ultron-Summarizer-8B](https://huggingface.co/passing2961/Ultron-Summarizer-8B)
- **Ultron-7B:** [🤖 Ultron-7B](https://huggingface.co/passing2961/Ultron-7B)

> 🚨 Disclaimer: All models and datasets are intended for research purposes only.

## Dataset Description

- **Repository:** [Code](https://github.com/passing2961/Stark)
- **Paper:** [Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge](https://arxiv.org/abs/2407.03958)
- **Point of Contact:** [Young-Jun Lee](mailto:yj2961@kaist.ac.kr)

## Dataset Summary

**Stark** is a publicly available, large-scale, long-term multi-modal conversation dataset that encompasses a diverse range of social personas, multi-modality formats, time intervals, and images. To automatically construct Stark, we introduce a novel multi-modal contextualization framework, **MCU**, which generates long-term multi-modal dialogues distilled from ChatGPT and our proposed **Plan-and-Execute Image Aligner**. An overview of MCU and an example from Stark are illustrated below.

![MCU Pipeline](stark_mcu_overview.PNG)

This repository contains virtual human face images generated by a text-to-image generative model (i.e., [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning)). Each virtual human face is created based on a predefined human attribute collection from [CosmicMan](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_CosmicMan_A_Text-to-Image_Foundation_Model_for_Humans_CVPR_2024_paper.pdf); the full human attribute information is presented in Appendix A.3 of our paper.

## Dataset Structure

Since the number of images is large (roughly 899K), we store and provide the image dataset in WebDataset format for efficiency; a minimal loading example is shown below.

| Field | Type | Description |
|-------|------|-------------|
| `key` | str | A unique identifier for each data entry in the dataset. |
| `url` | str | The URL path to the image stored in the dataset repository on HuggingFace. All URLs point to the base HuggingFace repository where the images are stored. |
| `jpg` | image | The image data associated with each entry, displayed as a thumbnail in the dataset viewer. This column stores the actual image content relevant to the conversation in the dataset. |
| `json` | dict | Additional metadata for each image, structured as a dictionary. The JSON field typically includes the following keys: `image_source`, `image_url`, `face_description`, and `index`. |

- **`image_source`**: Source of the image (`t2i`), where:
  - `"t2i"`: Image from a general text-to-image generative model (i.e., [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning)).
- **`image_url`**: External URL where the image was originally sourced. In this dataset, `image_url` is always an empty string.
- **`face_description`**: The virtual human face description, created based on the predefined human attribute collection from [CosmicMan](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_CosmicMan_A_Text-to-Image_Foundation_Model_for_Humans_CVPR_2024_paper.pdf).
- **`index`**: A unique index identifier for each image within the dataset, identical to the `key` field.
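Because the shards follow the WebDataset convention, one convenient way to read them is to stream the dataset with the 🤗 `datasets` library. The snippet below is only a minimal sketch: the repository id is taken from the example instance in the next subsection, and the column names (`__key__`, `jpg`, `json`) and decoding behavior are assumptions based on the fields described above, so they may differ slightly depending on your `datasets` version.

```python
from datasets import load_dataset

# Stream the WebDataset shards so the ~899K images are not downloaded up front.
# NOTE: repository id and column names are assumptions based on this card;
# adjust them if your loader exposes different names.
ds = load_dataset("passing2961/stark-face-image", split="train", streaming=True)

for sample in ds:
    # `json` holds image_source, image_url, face_description, and index.
    meta = sample["json"]
    print(sample["__key__"], meta["face_description"])
    # `jpg` is decoded to a PIL image by the WebDataset loader.
    sample["jpg"].save("example_face.jpg")
    break
```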
### Data Instance

The following is an example instance from the dataset.

```json
{
  "__key__": "14c9849f-ec2b-4b96-9dae-71645106c8ab",
  "__url__": "hf://datasets/passing2961/stark-face-image@c3a0b98979522e79174676583528c3c2f54741a8/stark-train-000000-of-000002.tar",
  "jpg": "",
  "json": {
    "face_description": "A nearly full-body shot, a 85-years-old female from Japan, fit, a brown wall, ponytail blonde above chest hair, off-shoulder short scoop neckline sleeveless silk graphic dress, striped green scoop neckline normal sleeveless cotton graphic camisole, cotton graphic tie.",
    "image_source": "t2i",
    "image_url": "",
    "index": "14c9849f-ec2b-4b96-9dae-71645106c8ab"
  }
}
```

## Dataset Construction

We construct the **Stark** dataset using our proposed novel framework, MCU, which distills long-term multi-modal dialogue from ChatGPT and our proposed Plan-and-Execute Image Aligner, powered by a personalized text-to-image generative model (i.e., PhotoMaker), image database retrieval, and web search. All prompt templates used for dataset construction are presented in the Appendix of our paper.

## Languages

- English

## Further Details and Limitations

For additional information and limitations, please refer to our [paper](https://arxiv.org/abs/2407.03958).

## License and Recommendations

The **Stark** dataset is intended for research purposes only. Despite our efforts to generate high-quality and diverse personalized images, users should be mindful of ethical considerations when utilizing the dataset.

## Acknowledgements

This work was supported by a grant from the KAIST-KT joint research project through AI Tech Lab, Institute of Convergence Technology, funded by KT [Project No. G01230605, Development of Task-oriented Persona-based Dialogue Generation Combining Multi-modal Interaction and Knowledge Modeling].

## Citation

If you find the resources in this repository useful, please cite our work:

```
@article{lee2024stark,
  title={Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge},
  author={Lee, Young-Jun and Lee, Dokyong and Youn, Junyoung and Oh, Kyeongjin and Ko, Byungsoo and Hyeon, Jonghwan and Choi, Ho-Jin},
  journal={arXiv preprint arXiv:2407.03958},
  year={2024}
}
```