liruiw committed on
Commit 64cfd85
1 Parent(s): dec101d

Push model using huggingface_hub.

Files changed (3)
  1. README.md +6 -18
  2. config.json +18 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -1,21 +1,9 @@
  ---
- license: mit
+ tags:
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
  ---
- **Scaling Robot Learning with Heterogeneous Pre-training**
-
- **Abstract**
-
- One of the key roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task and embodiment agnostic shared representation. This general architecture aligns the specific proprioception and vision inputs from distinct embodiments to a short sequence of tokens and then processes such tokens to map to control robots for different tasks. Leveraging the recent large-scale multi-embodiment real-world robotic datasets as well as simulation and human video datasets, we show that pre-training robotic policies across heterogeneity can exhibit compelling scaling behaviors, to the extent of 52 distinct datasets and 1 billion parameter models. Pre-trained HPTs outperform previous methods and enhance the finetuned policy performance on unseen downstream tasks and environments in simulator benchmarks and real-world settings.
-
- **Citation**
-
- @inproceedings{wang2024hpt,
- author = {Lirui Wang, Xinlei Chen, Jialiang Zhao, Russ Tedrake, Kaiming He},
- title = {Scaling Robot Learning with Heterogeneous Pre-training},
- booktitle = {Arxiv},
- year = {2024}
- }
-
- **Contact**
-
- Lirui Wang (liruiw@mit.edu)
+
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Library: [More Information Needed]
+ - Docs: [More Information Needed]
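
For context, here is a minimal sketch (assuming a recent huggingface_hub and PyTorch, not the authors' actual code) of how a model pushed with PyTorchModelHubMixin is typically defined and reloaded. `PolicyStub` and its layers are placeholders; the real HPT architecture lives in the authors' repository.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class PolicyStub(nn.Module, PyTorchModelHubMixin):
    # JSON-serializable __init__ kwargs are written to config.json by
    # push_to_hub()/save_pretrained() and passed back by from_pretrained().
    def __init__(self, embed_dim: int = 256, num_heads: int = 8, num_blocks: int = 16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=num_blocks)

# Push:   PolicyStub().push_to_hub("user/repo")   # uploads model.safetensors + config.json
# Reload: model = PolicyStub.from_pretrained("user/repo")
```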
config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "action_horizon": 8,
+   "drop_path": 0.1,
+   "embed_dim": 256,
+   "mae_loss_scale": 0.0,
+   "masked_autoencoding": false,
+   "no_trunk": false,
+   "num_blocks": 16,
+   "num_heads": 8,
+   "observation_horizon": 4,
+   "proprioception_expand": false,
+   "proprioception_expand_dim": 1,
+   "shared_modality_trunk": null,
+   "token_postprocessing": "mean",
+   "use_domain_embedding": false,
+   "use_modality_embedding": true,
+   "weight_init_style": "pytorch"
+ }
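
These fields are the constructor hyperparameters saved by the mixin. A short sketch of fetching and inspecting them follows; the repo id is an assumption, since this commit page does not name the repository.

```python
import json
from huggingface_hub import hf_hub_download

# "liruiw/hpt-base" is an assumed repo id, used for illustration only.
config_path = hf_hub_download(repo_id="liruiw/hpt-base", filename="config.json")
with open(config_path) as f:
    cfg = json.load(f)

# Per this config: a 16-block, 8-head transformer trunk with 256-dim tokens,
# mean token pooling, 4 observation steps in and an 8-step action horizon out.
print(cfg["num_blocks"], cfg["num_heads"], cfg["embed_dim"])
```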
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c7709a05b512c244121e733626d8b8a67ce77aebb9b04dbc7199ed78f79a8bb
+ size 50600888
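
This entry is a Git LFS pointer, not the weights themselves; the actual file is 50,600,888 bytes (about 50.6 MB, roughly 12.6M parameters if stored as float32). A sketch of fetching and reading it, under the same assumed repo id as above:

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# hf_hub_download resolves the LFS pointer and returns a local path to the
# actual weight file ("liruiw/hpt-base" is an assumed repo id, as above).
weights_path = hf_hub_download(repo_id="liruiw/hpt-base", filename="model.safetensors")
state_dict = load_file(weights_path)  # maps tensor names to torch.Tensors
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```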