---
license: cc-by-4.0
task_categories:
- image-classification
- image-to-text
- text-to-image
language:
- en
tags:
- medical
- images
- computer vision
- multimodal
- text
- clinical
- nlp
pretty_name: MultiCaRe Dataset
---

The dataset contains multi-modal data from over 75,000 open-access and de-identified case reports, including metadata, clinical cases, image captions, and more than 130,000 images. Images and clinical cases belong to different medical specialties, such as oncology, cardiology, surgery, and pathology. The structure of the dataset makes it easy to map images to their corresponding article metadata, clinical case, captions, and image labels. Details of the data structure can be found in the file data_dictionary.csv.

Almost 100,000 patients and almost 400,000 medical doctors and researchers were involved in the creation of the articles included in this dataset. The citation data of each article can be found in the metadata.parquet file.

Refer to the examples showcased in [this GitHub repository](https://github.com/mauro-nievoff/MultiCaRe_Dataset) to understand how to optimize the use of this dataset. For detailed insight into the contents of this dataset, please refer to [this data article](https://www.sciencedirect.com/science/article/pii/S2352340923010351) published in Data in Brief. The dataset is also available on [Zenodo](https://zenodo.org/records/10079370).
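
As a minimal usage sketch (not the official loading procedure; see the GitHub repository above for the recommended workflow), the metadata and data dictionary files named in this card can be inspected with pandas once downloaded locally. Exact column names and file layout should be checked against data_dictionary.csv.

```python
import pandas as pd

# Minimal sketch: assumes metadata.parquet and data_dictionary.csv
# from this dataset have been downloaded to the working directory.

# Article metadata, including the citation data for each case report.
metadata = pd.read_parquet("metadata.parquet")

# The data dictionary describes the structure and fields of the dataset files.
data_dictionary = pd.read_csv("data_dictionary.csv")

print(metadata.shape)
print(data_dictionary.head())
```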