VAE: Check for timesteps parameter in decoder before calling 8920e2d benibraz committed 29 days ago
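A minimal sketch of the pattern this commit describes, assuming a hypothetical `call_decoder` helper around a PyTorch decoder module; it only illustrates checking the decoder's forward signature for a `timesteps` argument before passing one:

    import inspect

    import torch


    def call_decoder(decoder: torch.nn.Module, latents: torch.Tensor, timesteps=None):
        # Only pass `timesteps` if the decoder's forward signature actually accepts it.
        accepts_timesteps = "timesteps" in inspect.signature(decoder.forward).parameters
        if accepts_timesteps and timesteps is not None:
            return decoder(latents, timesteps=timesteps)
        return decoder(latents)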
Merge pull request #34 from LightricksResearch/add_atten_to_decoder f63ea56 Yoav HaCohen committed 30 days ago
Merge pull request #33 from LightricksResearch/compress-all-half-channels 427926d Guy Shiran committed 30 days ago
causal_video_autoencoder: add option to halve channels in depth-to-space upsample block d5e984f guysrn committed about 1 month ago
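A rough 2D sketch of a depth-to-space upsample block with an optional channel-halving flag; `halve_channels` and the class name are assumed, and the repository's actual block operates on causal 3D video latents rather than 2D feature maps:

    import torch
    from torch import nn


    class DepthToSpaceUpsample(nn.Module):
        # 2D illustration only: expand channels with a conv, then pixel-shuffle
        # to trade channel depth for spatial resolution, optionally halving
        # the number of output channels along the way.
        def __init__(self, in_channels: int, scale: int = 2, halve_channels: bool = False):
            super().__init__()
            out_channels = in_channels // 2 if halve_channels else in_channels
            self.conv = nn.Conv2d(in_channels, out_channels * scale * scale, kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(scale)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.shuffle(self.conv(x))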
Merge pull request #29 from LightricksResearch/scale_vae_decoder_v1 c4b2a35 origordon committed on Nov 10
Merge pull request #30 from LightricksResearch/fix-no-flash-attention 05cb3e4 Sapir Weissbuch committed on Nov 7
model: fix flash attention enabling - do not check the device type at this point (it can still be CPU) 5940103 erichardson committed on Nov 7
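A hedged sketch of the pattern the fix describes: decide whether flash attention can be enabled from the installed PyTorch build and available GPU capability, not from the module's current device, which may still be CPU before the model is moved. The helper name and the Ampere capability threshold are assumptions, not the repository's code:

    import torch


    def flash_attention_supported() -> bool:
        # Check for the fused attention kernel and a capable GPU without
        # inspecting the model's current device (it may still be on CPU).
        has_sdpa = hasattr(torch.nn.functional, "scaled_dot_product_attention")
        has_capable_gpu = (
            torch.cuda.is_available()
            and torch.cuda.get_device_capability()[0] >= 8  # Ampere or newer
        )
        return has_sdpa and has_capable_gpu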
Merge pull request #19 from LightricksResearch/bfloat16-inference 91602f9 Sapir Weissbuch committed on Oct 31
Feature: Add mixed precision support and direct bfloat16 support. 1940326 daniel shalem committed on Oct 31
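An illustrative sketch of the two modes this commit mentions, using a stand-in module rather than the actual video model: autocast-based mixed precision versus casting the whole model to bfloat16 for direct bf16 inference:

    import torch

    model = torch.nn.Linear(64, 64)  # stand-in for the real model
    x = torch.randn(1, 64)

    # Mixed precision: keep weights in fp32, autocast eligible ops to bfloat16.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        y_mixed = model(x)

    # Direct bfloat16: cast weights and inputs so the whole forward runs in bf16.
    model_bf16 = model.to(torch.bfloat16)
    y_bf16 = model_bf16(x.to(torch.bfloat16))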
VAE: Support different latent_var_log options when returning intermediate features for 3D perceptual loss 7d89bb0 erichardson committed on Oct 31
VAE: Support returning intermediate features for 3D perceptual loss 028b6a1 erichardson committed on Oct 30
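A minimal sketch (class and flag names assumed, not the actual CausalVideoAutoencoder code) of a decoder that can optionally return per-block intermediate features, which a 3D perceptual loss could then compare between reconstruction and target:

    import torch
    from torch import nn


    class TinyDecoder(nn.Module):
        # Illustration only: collect each block's output when requested.
        def __init__(self):
            super().__init__()
            self.blocks = nn.ModuleList(
                [nn.Conv3d(8, 8, kernel_size=3, padding=1) for _ in range(3)]
            )

        def forward(self, z: torch.Tensor, return_features: bool = False):
            features = []
            x = z
            for block in self.blocks:
                x = block(x)
                if return_features:
                    features.append(x)
            return (x, features) if return_features else x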
VAE: Support more configurations for Encoder and Decoder blocks 43d3c68 erichardson committed on Oct 20
Merge pull request #7 from LightricksResearch/feature/fix-transformer-init-bug fc02e02 Dudu Moshe committed on Oct 8
transformer3d: init mode xora is never selected because a lowercase comparison is needed. a3498bb dudumoshe committed on Oct 8
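A tiny sketch of the kind of bug this commit describes: an init-mode string compared against "xora" without normalizing case never matches a capitalized config value, and lower-casing the value first fixes it (the function and return values are illustrative only):

    def resolve_init_mode(init_mode: str) -> str:
        # Buggy form: `init_mode == "xora"` fails for values like "Xora" or "XORA".
        # Normalizing to lowercase makes the comparison match as intended.
        if init_mode.lower() == "xora":
            return "xora"
        return "default"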