
commaVQ - GPT2M

A GPT2M model trained on a larger version of the commaVQ dataset.

This model is able to generate driving video unconditionally.

[Video: 5 seconds of imagined driving video generated unconditionally by GPT2M]
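
For illustration, here is a minimal sampling sketch, not the official script. It assumes the checkpoint loads as a standard GPT-2 causal language model through transformers; the token layout (BOS id, tokens per frame, frame rate) and the step that decodes tokens back to pixels are assumptions and should be checked against the commaVQ repository.

```python
# Minimal sketch of unconditional rollout with the commavq-gpt2m checkpoint.
# Assumptions (not confirmed by this card): GPT-2-compatible weights, frames
# stored as fixed-length blocks of VQ token ids, and a dedicated BOS/separator id.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("commaai/commavq-gpt2m")
model.eval()

BOS_TOKEN = 1024        # assumed id of the frame-separator / BOS token
TOKENS_PER_FRAME = 129  # assumed: 128 VQ codes per frame + 1 separator
N_FRAMES = 5            # keep the rollout short to stay inside the context window

# Unconditional generation: start from a single BOS token and sample forward.
prompt = torch.tensor([[BOS_TOKEN]])
with torch.no_grad():
    tokens = model.generate(
        prompt,
        max_new_tokens=N_FRAMES * TOKENS_PER_FRAME,
        do_sample=True,
        top_k=40,
    )

# `tokens` holds VQ indices; rendering them as video frames requires the commaVQ
# VQ-VAE decoder from the commaVQ repository (not shown here).
print(tokens.shape)
```

Longer clips, such as the 5-second example above, would need a sliding-window rollout once the sampled sequence exceeds the model's context length.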
