---
datasets:
  - OpenGVLab/InternVid
base_model:
  - openai/clip-vit-large-patch14
tags:
  - ViCLIP
---

Hugging Face weights for ViCLIP-L-14, a video-text CLIP model built on `openai/clip-vit-large-patch14` and trained on InternVid.

Remember to set `tokenizer_path` in `config.json` to the location of your local tokenizer before loading the model.
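That edit can also be scripted. A minimal sketch, assuming `config.json` is a flat JSON object; the helper name and the tokenizer path are placeholders, not part of this repository:

```python
import json

def set_tokenizer_path(config_path: str, tokenizer_path: str) -> dict:
    """Rewrite config.json so its tokenizer_path entry points at a local tokenizer file."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["tokenizer_path"] = tokenizer_path  # key name taken from this model card
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg

# Example (substitute the real path from your own checkout):
# set_tokenizer_path("config.json", "/path/to/tokenizer")
```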

See `demo.ipynb` for usage examples.