---
license: mit
---
# RDT-1B

RDT-1B is a 1B-parameter imitation-learning Diffusion Transformer, the largest of its kind to date, pre-trained on over 1M multi-robot episodes, likewise the largest such corpus to date. Given a language instruction and RGB observations from three camera views, RDT predicts the next 64 robot actions. RDT is inherently compatible with almost all modern mobile manipulators: single-arm and dual-arm, joint-space and end-effector (EEF) control, position and velocity control, and even platforms with a mobile chassis.
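As a rough illustration of this interface, here is a minimal sketch of a single prediction step. The wrapper function, the `policy.step` call, and the input shapes are hypothetical placeholders, not the actual RDT API; consult the repository linked under Model Sources for real usage.

```python
import numpy as np

# Hypothetical wrapper -- the real entry points live in the RDT repository
# (see "Model Sources" below) and may differ in names and signatures.
def predict_next_actions(policy, instruction: str,
                         images: list[np.ndarray],
                         proprio: np.ndarray) -> np.ndarray:
    """Return a chunk of the next 64 actions, shape (64, action_dim),
    given a language instruction, 3 RGB views, and the robot state."""
    return policy.step(instruction, images, proprio)  # placeholder call

# Example inputs: one exterior view plus two wrist views.
instruction = "pick up the red cup and place it on the plate"
images = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
proprio = np.zeros(14, dtype=np.float32)  # e.g. dual-arm joint positions
```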

 
### Model Sources

- **Repository:** https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer
- **Paper:** [More Information Needed]
- **Project Page:** https://rdt-robotics.github.io/rdt-robotics/

## Uses

RDT-1B supports fine-tuning on custom datasets, deployment and inference on real robots, and pre-training on large-scale datasets.

Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for guides on all of the above.
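Before following those guides, the pre-trained weights can be fetched from the Hugging Face Hub. This is a minimal sketch using `huggingface_hub.snapshot_download`; the `repo_id` is assumed to be this model card's repository and may need adjusting.

```python
from huggingface_hub import snapshot_download

# Fetch the checkpoint locally; the repo_id is assumed to be this model
# card's repository and may need adjusting for your setup.
local_dir = snapshot_download(repo_id="robotics-diffusion-transformer/rdt-1b")
print(f"Checkpoint downloaded to: {local_dir}")
```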


## Citation 


**BibTeX:**

[More Information Needed]