
AI & ML interests
Google ❤️ Open Source AI
Welcome to the official Google organization on Hugging Face!
Google collaborates with Hugging Face across open science, open source, cloud, and hardware, enabling companies to innovate with AI on Google Cloud AI services and infrastructure using the Hugging Face ecosystem.
Featured Models and Tools
- Gemma Family of Open Multimodal Models
- Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models (a loading sketch follows this list)
- PaliGemma is a versatile and lightweight vision-language model (VLM)
- CodeGemma is a collection of lightweight open code models built on top of Gemma
- RecurrentGemma is a family of open language models built on a novel recurrent architecture developed at Google
- ShieldGemma is a series of safety content moderation models built upon Gemma 2 that target four harm categories
- Health AI Developer Foundations
- MedGemma is a collection of open models for medical image and text comprehension to accelerate building healthcare AI applications
- TxGemma is a collection of open models to accelerate the development of therapeutics
- CXR Foundation is an embedding model for efficiently building AI for chest X-ray applications
- Path Foundation is an embedding model for efficiently building AI for histopathology applications
- Derm Foundation is an embedding model for efficiently building AI for skin imaging applications
- HeAR (TensorFlow, PyTorch) is an embedding model for efficiently building AI for audio originating from the respiratory system
- BERT, T5, and TimesFM Model Families
- Author ML models with MaxText, JAX, Keras, TensorFlow, and PyTorch/XLA
- SynthID is a Google DeepMind technology that watermarks and identifies AI-generated content (🤗 Space)
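For a quick start with the models above, the sketch below loads a Gemma instruction-tuned checkpoint through the transformers pipeline. This is a minimal sketch, not the only supported path: the checkpoint name, device placement, and generation settings are illustrative assumptions, and most Gemma checkpoints are gated, so you need to accept the license on the Hub and authenticate with a Hugging Face token first.

```python
# Minimal sketch: chat with a Gemma instruction-tuned checkpoint via transformers.
# The checkpoint name and settings are illustrative; gated models require
# accepting the license on the Hub and authenticating beforehand.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # example checkpoint (assumption)
    device_map="auto",             # place weights on available GPU/CPU
)

messages = [{"role": "user", "content": "Summarize the Gemma model family in one sentence."}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```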
Open Research and Community Resources
- Google Blogs:
- Notable GitHub Repositories:
- https://github.com/google/jax is a Python library for high-performance numerical computing and machine learning (a minimal usage sketch follows this list)
- https://github.com/huggingface/Google-Cloud-Containers facilitates the training and deployment of Hugging Face models on Google Cloud
- https://github.com/pytorch/xla enables PyTorch on XLA Devices (e.g. Google TPU)
- https://github.com/huggingface/optimum-tpu brings the power of TPUs to your training and inference stack
- https://github.com/openxla/xla is a machine learning compiler for GPUs, CPUs, and ML accelerators
- https://github.com/google/JetStream (and https://github.com/google/jetstream-pytorch) is a throughput- and memory-optimized engine for large language model (LLM) inference on XLA devices
- https://github.com/google/flax is a neural network library for JAX that is designed for flexibility
- https://github.com/kubernetes-sigs/lws facilitates Kubernetes deployment patterns for AI/ML inference workloads, especially multi-host inference workloads
- https://gke-ai-labs.dev/ is a collection of AI examples, best-practices, and prebuilt solutions
- Google Research Papers: https://research.google/
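To make the JAX entry above concrete, here is a minimal sketch of the core JAX workflow: writing a plain Python function and transforming it with jit and value_and_grad. The toy linear model and data are illustrative only.

```python
# Minimal JAX sketch: jit-compiled value and gradient of a toy squared-error loss.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared error of a linear model; purely illustrative.
    return jnp.mean((x @ w - y) ** 2)

# Compose transformations: differentiate w.r.t. w, then JIT-compile.
loss_and_grad = jax.jit(jax.value_and_grad(loss))

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))
true_w = jnp.array([1.0, -2.0, 0.5])
y = x @ true_w

value, grad = loss_and_grad(jnp.zeros(3), x, y)
print(value, grad)
```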
On-device ML using Google AI Edge
- Customize and run common ML Tasks with low-code MediaPipe Solutions
- Run pretrained or custom models on-device with LiteRT, previously known as TensorFlow Lite (an inference sketch follows this list)
- Convert TensorFlow and JAX models to LiteRT
- Convert PyTorch models to LiteRT and author high performance on-device LLMs with AI Edge Torch
- Visualize and debug models with Model Explorer (🤗 Space)
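As an illustration of the on-device flow described above, the sketch below runs a single inference with a converted .tflite model using the long-standing TensorFlow Lite Interpreter API (the same model format LiteRT consumes). The model path and the zero-filled input are placeholders, assuming you have already converted a model with the tools above.

```python
# Minimal sketch: run one inference on a converted LiteRT / TensorFlow Lite model.
# "model.tflite" is a placeholder for a model produced by the converters above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]).shape)
```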
Partnership Highlights and Resources
- Select Google Cloud CPU, GPU, or TPU options when setting up your Hugging Face Inference Endpoints and Spaces
- Train and Deploy Hugging Face models on Google Kubernetes Engine (GKE) and Vertex AI directly from Hugging Face model landing pages or from Google Cloud Model Garden
- Integrate Colab notebooks with Hugging Face Hub via the HF_TOKEN secret manager integration and transformers/huggingface_hub pre-installs (an authentication sketch follows this list)
- Leverage Hugging Face Deep Learning Containers (DLCs) for easy training and deployment of Hugging Face models on Google Cloud infrastructure
- Run optimized, zero-configuration inference microservices with Hugging Face Generative AI Services (HUGS) via the Google Cloud Marketplace
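As one concrete example of the Colab integration above, the sketch below reads the HF_TOKEN secret from Colab's secrets manager and uses it to authenticate huggingface_hub before downloading a model snapshot. The repo id is just an example, and the notebook must be granted access to the secret for userdata.get to succeed.

```python
# Minimal Colab sketch: authenticate to the Hugging Face Hub with the HF_TOKEN
# secret, then pull a model snapshot. Runs inside a Colab notebook only.
from google.colab import userdata                    # Colab secrets manager
from huggingface_hub import login, snapshot_download

login(token=userdata.get("HF_TOKEN"))                # secret name used by the integration

# Example repo id (assumption); gated repos also require an accepted license.
local_dir = snapshot_download("google/gemma-2-2b-it")
print("Downloaded to:", local_dir)
```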
Read about our principles for responsible AI at https://ai.google/responsibility/principles
Collections (37)
Spaces (9)
- 🩺 MedGemma - Radiology Explainer Demo: Radiology image & report explainer demo, built with MedGemma
- 🔬 Path Foundation Demo: Access pathology images for medical reference
- 🩻 CXR Foundation Demo: Demo usage of the CXR Foundation model embeddings
- 🚀 Compare Siglip1 Siglip2: Compare SigLIP1 and SigLIP2 on zero-shot classification
- 🌖 Paligemma2 Mix: Generate text or segment objects from an image
- 🏋 Stable Diffusion XL on TPUv5e: Generate images from text prompts with various styles
Models (997)
- google/svq
- google/medgemma-27b-text-it (Image-Text-to-Text)
- google/medgemma-4b-pt (Image-Text-to-Text)
- google/medgemma-4b-it (Image-Text-to-Text)
- google/gemma-3n-E2B-it-litert-preview (Image-Text-to-Text)
- google/gemma-3n-E4B-it-litert-preview (Image-Text-to-Text)
- google/gemma-scope-2b-pt-transcoders
- google/hear-pytorch (Image Feature Extraction)
- google/tapnet
- google/hear
Datasets (54)
- google/svq
- google/wmt24pp
- google/smol
- google/wmt24pp-images
- google/spiqa
- google/FACTS-grounding-public
- google/frames-benchmark
- google/flame-collection
- google/xtreme_s
- google/coverbench