mlfu7 committed on
Commit
c8ef7a8
1 Parent(s): 2c1132b

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,6 +5,6 @@ by <a href="https://max-fu.github.io">Max (Letian) Fu*</a>, <a href="https://qin

 This repo contains the checkpoints for *In-Context Imitation Learning via Next-Token Prediction*. We investigate how to bring the few-shot, in-context learning capability of next-token prediction models (e.g., GPT) into real-robot imitation learning policies.

- In particular, we store the pre-trained vision encoder and ICRT model separately. Please find them in [encoder](crossmae_rtx/cross-mae-rtx-vitb.pth) and [ICRT](icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth) separately.
+ In particular, we store the pre-trained vision encoder and the ICRT models separately. Please find them in [encoder](crossmae_rtx/cross-mae-rtx-vitb.pth), [ICRT](icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth), and [ICRT-Llama7B](icrt_llama7b_lora/icrt_llama7b_lora.pth).

 Please refer to the [project page](https://github.com/Max-Fu/icrt) for instructions on installing the repo, training the model, and running inference.
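As a minimal sketch (not part of the commit), the checkpoints referenced above could be fetched with `huggingface_hub`. The repo id below is a placeholder assumption; substitute this model repo's actual id. The file paths are the ones linked in the README.

```python
# Sketch only: download the checkpoints referenced in the README.
from huggingface_hub import hf_hub_download
import torch

REPO_ID = "mlfu7/ICRT"  # placeholder assumption -- replace with the actual repo id

# Pre-trained vision encoder (filename suggests a CrossMAE ViT-B checkpoint)
encoder_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="crossmae_rtx/cross-mae-rtx-vitb.pth",
)

# ICRT checkpoint pre-trained on DROID
icrt_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="icrt_vitb_droid_pretrained/icrt_vitb_droid_pretrained.pth",
)

# ICRT-Llama7B LoRA checkpoint
lora_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="icrt_llama7b_lora/icrt_llama7b_lora.pth",
)

# The files are PyTorch checkpoints; model construction and weight loading
# are handled by the icrt repo (see the project page above).
encoder_state = torch.load(encoder_path, map_location="cpu")
```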