
# Image-to-Art Search 🔍

> "Find real artwork that looks like your images"

This project fine-tunes a Vision Transformer (ViT) model, starting from the pre-trained "google/vit-base-patch32-224-in21k" weights and fine-tuned in the style of the ArtButMakeItSports Instagram account, to perform image-to-art search across the 81k artworks made available by WikiArt.

*(demo image: horse)*

## Overview

This project leverages the Vision Transformer (ViT) architecture for image-to-art search. By fine-tuning the pre-trained ViT model on a custom dataset derived from the Instagram account ArtButMakeItSports, the goal is a model that matches arbitrary images (not only sports photos) to visually similar artworks, making any image searchable against WikiArt.

## Installation

1. Clone the repository:

   ```sh
   git clone https://github.com/brunorosilva/img2art-search.git
   cd img2art-search
   ```

2. Install Poetry:

   ```sh
   pip install poetry
   ```

3. Install the project dependencies:

   ```sh
   poetry install
   ```

## How it works

### Dataset Preparation

1. Download images from the ArtButMakeItSports Instagram account.
2. Organize the images into appropriate directories for training and validation.
3. Get a fine-tuned model.
4. Create the gallery using WikiArt.
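Step 2 above can be sketched roughly as follows. This is a minimal illustration only: the `split_dataset` helper, the `train`/`val` folder names, and the 80/20 split are assumptions for the example, not the repository's actual layout.

```python
import random
import shutil
from pathlib import Path

def split_dataset(src_dir: str, dst_dir: str, val_fraction: float = 0.2, seed: int = 42) -> None:
    """Shuffle the images in src_dir and copy them into train/ and val/ subfolders."""
    images = sorted(Path(src_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # deterministic shuffle for reproducible splits
    n_val = int(len(images) * val_fraction)
    for subset, files in (("val", images[:n_val]), ("train", images[n_val:])):
        out = Path(dst_dir) / subset
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
```

Fixing the shuffle seed keeps the train/validation split stable across runs, so evaluation numbers stay comparable.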

### Training

Fine-tune the ViT model:

```sh
make train
```

### Inference via Gradio

Perform image-to-art search using the fine-tuned model:

```sh
make viz
```

### Recreate the WikiArt gallery

```sh
make wikiart
```

### Create a new gallery

If you want to index new images to search, use:

```sh
poetry run python main.py gallery --gallery_path <your_path>
```

## Dataset

The dataset derives from 1k images from the Instagram account ArtButMakeItSports. Images are downloaded and split into training, validation, and test sets, and each image is paired with its corresponding artwork for training. If you want this dataset, just ask me and state how you intend to use it.

WikiArt is indexed using the same process, except that there is no expected result: each artwork is mapped to itself, the model is used as a feature extractor, and the gallery embeddings are saved as a NumPy file (this will be migrated to chromadb in the future).
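With the gallery stored as a plain NumPy array, search reduces to a nearest-neighbor lookup over embeddings. The sketch below illustrates the idea; the `top_k_matches` helper, the array shapes, and the random vectors standing in for ViT embeddings are assumptions for the example, not the repository's actual code.

```python
import numpy as np

def top_k_matches(query_emb, gallery, k=4):
    """Return indices of the k gallery embeddings most similar to the query,
    ranked by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                  # cosine similarity against every artwork
    return np.argsort(-sims)[:k]  # indices of the best matches, best first

# Toy usage: random vectors stand in for embeddings loaded from the saved .npy file.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((81, 768))              # (num_artworks, embedding_dim)
query = gallery[5] + 0.01 * rng.standard_normal(768)  # an image very close to artwork 5
best = top_k_matches(query, gallery)
```

Because the similarity is computed against every row at once, a single matrix-vector product scores the whole 81k-artwork gallery; this is the part that a vector database like chromadb would eventually replace.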

## Training

The training script fine-tunes the ViT model on the prepared dataset. Key steps include:

1. Loading the pre-trained "google/vit-base-patch32-224-in21k" weights.
2. Preparing the dataset and data loaders.
3. Fine-tuning the model using a custom training loop.
4. Saving the model to the results folder.
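To illustrate the idea behind step 3, pulling each photo/artwork embedding pair together, here is a framework-free toy stand-in: a linear projection trained by gradient descent, regularized toward the identity so the features do not collapse to zero. The `train_projection` helper and every hyperparameter here are illustrative; the repository's actual loop fine-tunes the ViT itself.

```python
import numpy as np

def train_projection(pairs, dim, lr=0.01, reg=0.1, epochs=50):
    """Toy training loop: learn a projection W minimizing ||W(x - y)||^2 over
    (photo, artwork) embedding pairs, plus reg * ||W - I||^2 to avoid collapse."""
    W = np.eye(dim)
    losses = []
    for _ in range(epochs):
        loss = 0.0
        grad = np.zeros_like(W)
        for x, y in pairs:
            d = W @ (x - y)                 # residual between the paired embeddings
            loss += d @ d
            grad += 2.0 * np.outer(d, x - y)
        loss += reg * np.sum((W - np.eye(dim)) ** 2)
        grad += 2.0 * reg * (W - np.eye(dim))
        W -= lr * grad                      # plain gradient-descent step
        losses.append(loss)
    return W, losses

rng = np.random.default_rng(0)
dim = 8
pairs = [(rng.standard_normal(dim), rng.standard_normal(dim)) for _ in range(16)]
W, losses = train_projection(pairs, dim)
```

The same shape, forward pass, pairwise loss, backward pass, parameter update, is what the real PyTorch loop performs over the ViT weights.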

## Interface

The recommended way to get results is the Gradio interface, started with `make viz`. This opens a local server where you can search with any image you want, or even use your webcam, and get the top 4 search results.

## Examples

- Search for contextual similarity (example image: `field`)
- Search for shape similarity (example image: `basket`)
- Search for expression similarity, yep, that's me (example image: `serious_face`)
- Search for pose similarity (example image: `lawyer`)
- Search for an object (example image: `horse`)

## Contributing

There are three topics I'd appreciate help with:

1. Increasing the gallery by embedding new painting datasets. The current gallery has 81k artworks because I used a ready-to-go dataset, but the complete WikiArt catalog alone has 250k+ artworks, so I want to raise this number to at least 300k in the near future;
2. Porting the encoding and search to a vector db, preferably chromadb;
3. Opening issues with suggestions for improvement; new ideas will be considered.

## License

The source code for the site is licensed under the MIT license, which you can find in the MIT-LICENSE.txt file.

All graphical assets are licensed under the Creative Commons Attribution 3.0 Unported License.