---
license: mit
language:
- en
pipeline_tag: robotics
library_name: transformers
tags:
- robotics
- pytorch
- multimodal
- pretraining
- vla
- diffusion
- rdt
---

# RDT-170M

![](head.mp4)

RDT-170M is a 170M-parameter imitation-learning Diffusion Transformer (referred to as ***RDT (small)*** in our ablation study). Given a language instruction and RGB images from up to three views, RDT predicts the next 64 robot actions. RDT is compatible with almost all modern mobile manipulators, from single-arm to dual-arm, joint-space to end-effector (EEF) control, position to velocity control, and even wheeled locomotion.

All the [code](https://github.com/thu-ml/RoboticsDiffusionTransformer/tree/main?tab=readme-ov-file), pre-trained model weights, and [data](https://huggingface.co/datasets/robotics-diffusion-transformer/rdt-ft-data) are licensed under the MIT license.

Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/) and [paper](https://arxiv.org/pdf/2410.07864) for more information.

## Model Details

- **Developed by:** The RDT team, consisting of researchers from the [TSAIL group](https://ml.cs.tsinghua.edu.cn/) at Tsinghua University
- **Task Type:** Vision-Language-Action (language, image => robot actions)
- **Model Type:** Diffusion Policy with Transformers
- **License:** MIT
- **Language(s) (NLP):** en
- **Multi-Modal Encoders:**
  - **Vision Backbone:** [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
  - **Language Model:** [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)
- **Pre-Training Datasets:** 46 datasets, including the [RT-1 Dataset](https://robotics-transformer1.github.io/), [RH20T](https://rh20t.github.io/), [DROID](https://droid-dataset.github.io/), [BridgeData V2](https://rail-berkeley.github.io/bridgedata/), [RoboSet](https://robopen.github.io/roboset/), and a subset of [Open X-Embodiment](https://robotics-transformer-x.github.io/). See [this link](https://github.com/thu-ml/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md#download-and-prepare-datasets) for the detailed list.
- **Repository:** https://github.com/thu-ml/RoboticsDiffusionTransformer
- **Paper:** https://arxiv.org/pdf/2410.07864
- **Project Page:** https://rdt-robotics.github.io/rdt-robotics/

## Uses

RDT takes a language instruction, RGB images (from up to three views), the control frequency (if any), and proprioception as input, and predicts the next 64 robot actions.
RDT supports control of almost all robot manipulators through its unified action space, which covers the main physical quantities of a manipulator (e.g., end-effector and joint commands, positions and velocities, and wheeled locomotion).
To deploy RDT on your robot platform, you fill the relevant quantities of the raw action vector into the corresponding slots of the unified action-space vector, as sketched below. See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for more information.
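
As a rough illustration, the mapping amounts to scattering your robot's raw values into a fixed-size, zero-padded vector. The dimension and index ranges below are placeholders invented for this sketch, not the layout defined in the repository; consult the repository's state-vector configuration for the authoritative mapping.

```python
# Hedged sketch of filling a raw action into the unified action-space vector.
# UNIFIED_DIM and the index ranges are illustrative assumptions, NOT the layout
# actually defined in the RoboticsDiffusionTransformer repository.
import numpy as np

UNIFIED_DIM = 128                   # assumed size of the unified vector
LEFT_ARM_JOINT_POS = slice(0, 7)    # hypothetical slots for left-arm joint positions
RIGHT_ARM_JOINT_POS = slice(7, 14)  # hypothetical slots for right-arm joint positions

def to_unified(raw_action: np.ndarray) -> np.ndarray:
    """Scatter a 14-dim dual-arm joint action into the padded unified vector."""
    unified = np.zeros(UNIFIED_DIM, dtype=np.float32)
    unified[LEFT_ARM_JOINT_POS] = raw_action[:7]     # left-arm joints
    unified[RIGHT_ARM_JOINT_POS] = raw_action[7:14]  # right-arm joints
    return unified  # quantities your robot does not have simply stay zero
```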

**Out-of-Scope**: Due to the embodiment gap, RDT cannot yet generalize to robot platforms that were not seen in the pre-training datasets. In this case, we recommend collecting a small dataset on the target robot and using it to fine-tune RDT. See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for a tutorial.

Here's an example of how to use the RDT-1B model for inference on a robot:

```python
# Please first clone the repository and install its dependencies,
# then switch to the root directory of the repository: "cd RoboticsDiffusionTransformer"
import torch
from typing import List
from PIL import Image

# Import the create function from the code base
from scripts.agilex_model import create_model

# Names of cameras used for visual input
CAMERA_NAMES = ['cam_high', 'cam_right_wrist', 'cam_left_wrist']
config = {
    'episode_len': 1000,  # Max length of one episode
    'state_dim': 14,      # Dimension of the robot's state
    'chunk_size': 64,     # Number of actions to predict in one step
    'camera_names': CAMERA_NAMES,
}
pretrained_vision_encoder_name_or_path = "google/siglip-so400m-patch14-384"
# Create the model with the specified configuration
policy = create_model(
    args=config,
    dtype=torch.bfloat16,
    pretrained_vision_encoder_name_or_path=pretrained_vision_encoder_name_or_path,
    pretrained='robotics-diffusion-transformer/rdt-1b',
    control_frequency=25,
)

# Start the inference process
# Load the pre-computed language embeddings
# (refer to scripts/encode_lang.py for how to encode the language instruction)
lang_embeddings_path = 'your/language/embedding/path'
text_embedding = torch.load(lang_embeddings_path)['embeddings']
images: List[Image.Image] = ...  # The images from the last 2 frames
proprio = ...                    # The current robot state
# Perform inference to predict the next `chunk_size` actions
actions = policy.step(
    proprio=proprio,
    images=images,
    text_embeds=text_embedding
)
```
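
The language embeddings loaded above are pre-computed with the T5-XXL encoder listed under Model Details. The repository's `scripts/encode_lang.py` is the authoritative reference; the snippet below is only a hedged sketch of what that pre-computation might look like with the `transformers` library (the instruction text is an arbitrary example, and the save path and `embeddings` key simply mirror the inference example above).

```python
# Hedged sketch: pre-compute language embeddings with the T5-v1.1-XXL encoder.
# See scripts/encode_lang.py in the repository for the actual implementation.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
encoder = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl", torch_dtype=torch.bfloat16)

instruction = "Pick up the red block and place it in the box."  # example instruction
inputs = tokenizer(instruction, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)

# Save in the format expected by the inference example above (an 'embeddings' key)
torch.save({"embeddings": embeddings}, "your/language/embedding/path")
```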
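
Executing the returned 64-step action chunk is left to your own robot driver. A minimal sketch, assuming a hypothetical `robot.apply_action` interface and the 25 Hz control frequency passed to `create_model` above:

```python
# Hedged sketch: replay the predicted action chunk at the configured control frequency.
# `robot.apply_action` is a placeholder for your own robot driver, not part of the repo.
import time

CONTROL_FREQUENCY = 25  # Hz, matching the value passed to create_model above

for action in actions:          # actions: the chunk of 64 predicted actions
    robot.apply_action(action)  # send one action to your (hypothetical) robot interface
    time.sleep(1.0 / CONTROL_FREQUENCY)
```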

<!-- RDT-1B supports finetuning on custom datasets, deploying and inferencing on real robots, and retraining the model.
Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides. -->

## Citation

If you find our work helpful, please cite us:

```bibtex
@article{liu2024rdt,
    title={RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation},
    author={Liu, Songming and Wu, Lingxuan and Li, Bangguo and Tan, Hengkai and Chen, Huayu and Wang, Zhengyi and Xu, Ke and Su, Hang and Zhu, Jun},
    journal={arXiv preprint arXiv:2410.07864},
    year={2024}
}
```

Thank you!