Eric Lau
andywu-kby
AI & ML interests
None yet
Recent Activity
reacted to jasoncorkill's post about 12 hours ago
Do you remember https://thispersondoesnotexist.com/ ? It was one of the first cases where the future of generative media really hit us. Humans are incredibly good at recognizing and analyzing faces, so they are a very good litmus test for any generative image model.
But none of the current benchmarks measures models' ability to generate humans independently. So we built our own. We measure each model's ability to generate a diverse set of human faces, and using over 20,000 human annotations we ranked all of the major models on how well they generate faces. Find the full ranking here:
https://app.rapidata.ai/mri/benchmarks/68af24ae74482280b62f7596
We have released the full underlying data publicly here on Hugging Face: https://huggingface.co/datasets/Rapidata/Face_Generation_Benchmark
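If you want to explore the annotations yourself, here is a minimal sketch using the standard `datasets` library; the splits and columns are assumptions, so check the dataset card for the actual schema:

```python
# Minimal sketch: load the released benchmark data with the Hugging Face `datasets` API.
# Assumption: the repo loads with plain load_dataset(); split and column names are
# illustrative, not taken from the dataset card.
from datasets import load_dataset

data = load_dataset("Rapidata/Face_Generation_Benchmark")
print(data)                           # shows the available splits and their columns

first_split = next(iter(data.values()))
print(first_split[0])                 # inspect a single annotated example
```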
reacted to Locutusque's post about 12 hours ago
AutoXLA - Accelerating Large Models on TPU
AutoXLA is an experimental library that automates the distribution, optimization, and quantization of large language models for TPUs using PyTorch/XLA. It extends the Hugging Face Transformers interface with TPU-aware features such as automatic sharding, custom attention kernels, and quantization-aware loading, making large-scale deployment and training both simpler and faster.
With quantization and Splash Attention kernels, AutoXLA achieves up to 4× speedups over standard Flash Attention implementations, significantly improving throughput for both inference and training workloads.
Whether you're experimenting with distributed setups (FSDP, 2D, or 3D sharding) or optimizing memory via LanguageModelQuantizer, AutoXLA is built to make scaling LLMs on TPU seamless.
⚠️ Note: This is an experimental repository. Expect rough edges! Please report bugs or unexpected behavior through GitHub issues.
GitHub Repository: https://github.com/Locutusque/AutoXLA
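For readers new to PyTorch/XLA, here is a rough sketch of the baseline Transformers-on-TPU workflow that AutoXLA automates; only the `transformers` and `torch_xla` calls are standard, and the AutoXLA-specific step is a hypothetical placeholder rather than the library's actual API (see the repo for the real entry points):

```python
# Sketch of the workflow AutoXLA builds on: a Hugging Face causal LM moved onto a
# TPU core via PyTorch/XLA. The AutoXLA step below is a placeholder comment, not
# the library's real interface.
import torch
import torch_xla.core.xla_model as xm
from transformers import AutoModelForCausalLM, AutoTokenizer

device = xm.xla_device()  # acquire the XLA (TPU) device

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "gpt2", torch_dtype=torch.bfloat16
).to(device)

# Hypothetical AutoXLA step: apply TPU-aware sharding / quantization here,
# e.g. via the LanguageModelQuantizer mentioned in the post (signature unknown).

inputs = tokenizer("TPUs are", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```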