Update README.md

Publish the initial version of the Google Organization Card, in collaboration with multiple teams at Google and Hugging Face.

README.md
colorTo: red
sdk: static
pinned: false
---

![Hugging Face x Google Cloud](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/google-cloud/thumbnail.png)

*Welcome to the official Google organization on Hugging Face!*

[Google collaborates with Hugging Face](https://huggingface.co/blog/gcp-partnership) across open science, open source, cloud, and hardware to **enable companies to innovate with AI** [on Google Cloud AI services and infrastructure with the Hugging Face ecosystem](https://huggingface.co/docs/google-cloud/main/en/index).

## Featured Models and Tools

* **Gemma Family of Open Multimodal Models**
  * **Gemma** is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models
  * **PaliGemma** is a versatile and lightweight vision-language model (VLM)
  * **CodeGemma** is a collection of lightweight open code models built on top of Gemma
  * **RecurrentGemma** is a family of open language models built on a novel recurrent architecture developed at Google
  * **ShieldGemma** is a series of safety content moderation models built upon Gemma 2 that target four harm categories
* **[BERT](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc), [T5](https://huggingface.co/collections/google/t5-release-65005e7c520f8d7b4d037918), and [TimesFM](https://github.com/google-research/timesfm) Model Families**
* **Author ML models with [MaxText](https://github.com/google/maxtext), [JAX](https://github.com/google/jax), [Keras](https://github.com/keras-team/keras), [TensorFlow](https://github.com/tensorflow/tensorflow), and [PyTorch/XLA](https://github.com/pytorch/xla)**
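
To browse the model families above programmatically, the Hub can be queried with `huggingface_hub` — a minimal sketch (assumes the package is installed and network access to huggingface.co):

```python
from huggingface_hub import list_models

# List a few Gemma-family checkpoints published under the google organization.
for model in list_models(author="google", search="gemma", limit=5):
    print(model.id)
```

The same `author`/`search` filters work for the BERT, T5, and other families listed above.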

## Open Research and Community Resources

* **Google Blogs**:
  * [https://blog.google/](https://blog.google/)
  * [https://cloud.google.com/blog/](https://cloud.google.com/blog/)
  * [https://deepmind.google/discover/blog/](https://deepmind.google/discover/blog/)
  * [https://developers.google.com/learn?category=aiandmachinelearning](https://developers.google.com/learn?category=aiandmachinelearning)
* **Notable GitHub Repositories**:
  * [https://github.com/google/jax](https://github.com/google/jax) is a Python library for high-performance numerical computing and machine learning
  * [https://github.com/huggingface/Google-Cloud-Containers](https://github.com/huggingface/Google-Cloud-Containers) facilitates the training and deployment of Hugging Face models on Google Cloud
  * [https://github.com/pytorch/xla](https://github.com/pytorch/xla) enables PyTorch on XLA devices (e.g. Google TPU)
  * [https://github.com/huggingface/optimum-tpu](https://github.com/huggingface/optimum-tpu) brings the power of TPUs to your training and inference stack
  * [https://github.com/openxla/xla](https://github.com/openxla/xla) is a machine learning compiler for GPUs, CPUs, and ML accelerators
  * [https://github.com/google/JetStream](https://github.com/google/JetStream) (and [https://github.com/google/jetstream-pytorch](https://github.com/google/jetstream-pytorch)) is a throughput- and memory-optimized engine for large language model (LLM) inference on XLA devices
  * [https://github.com/google/flax](https://github.com/google/flax) is a neural network library for JAX that is designed for flexibility
  * [https://github.com/kubernetes-sigs/lws](https://github.com/kubernetes-sigs/lws) facilitates Kubernetes deployment patterns for AI/ML inference workloads, especially multi-host inference workloads
  * [https://github.com/GoogleCloudPlatform/ai-on-gke](https://github.com/GoogleCloudPlatform/ai-on-gke) is a collection of AI examples, best practices, and prebuilt solutions
* **Google AI Research Papers**: [https://research.google/](https://research.google/)
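
As a quick taste of the JAX library listed above, here is a minimal sketch of its two core transformations, `jax.grad` and `jax.jit` (assumes `jax` is installed):

```python
import jax
import jax.numpy as jnp

# f(x) = sum(x^2); its gradient is 2x.
def f(x):
    return jnp.sum(x ** 2)

grad_f = jax.grad(f)           # transform f into its gradient function
x = jnp.array([1.0, 2.0, 3.0])
print(grad_f(x))               # [2. 4. 6.]

# jax.jit compiles the function with XLA for fast repeated
# execution on CPU, GPU, or TPU.
fast_grad = jax.jit(grad_f)
print(fast_grad(x))            # [2. 4. 6.]
```

The same XLA compilation path underlies several of the other projects above, including PyTorch/XLA and JetStream.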

## On-device ML using [Google AI Edge](http://ai.google.dev/edge)

* Customize and run common ML tasks with low-code [MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide)
* Run [pretrained](https://ai.google.dev/edge/litert/models/trained) or custom models on-device with [LiteRT (previously known as TensorFlow Lite)](https://ai.google.dev/edge/lite)
* Convert [TensorFlow](https://ai.google.dev/edge/lite/models/convert_tf) and [JAX](https://ai.google.dev/edge/lite/models/convert_jax) models to LiteRT
* Convert PyTorch models to LiteRT and author high-performance on-device LLMs with [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch)
* Visualize and debug models with [Model Explorer](https://ai.google.dev/edge/model-explorer) ([🤗 Space](https://huggingface.co/spaces/google/model-explorer))
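
The TensorFlow-to-LiteRT conversion path above can be sketched with `tf.lite.TFLiteConverter`; the tiny Keras model here is a hypothetical stand-in for a real one (assumes `tensorflow` is installed):

```python
import tensorflow as tf

# A tiny stand-in Keras model (hypothetical; substitute your own).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert the Keras model to the LiteRT (.tflite) flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# The serialized model can be shipped on-device and executed
# with the LiteRT interpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

JAX and PyTorch models follow analogous paths via the JAX converter and AI Edge Torch linked above.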

## Partnership Highlights and Resources

* Select Google Cloud CPU, GPU, or TPU options when setting up your **Hugging Face [Inference Endpoints](https://huggingface.co/blog/tpu-inference-endpoints-spaces) and Spaces**
* **Train and deploy Hugging Face models** on Google Kubernetes Engine (GKE) and Vertex AI **directly from Hugging Face model landing pages or from Google Cloud Model Garden**
* **Integrate [Colab](https://colab.research.google.com/) notebooks with the Hugging Face Hub** via the [HF_TOKEN secret manager integration](https://huggingface.co/docs/huggingface_hub/v0.23.3/en/quick-start#environment-variable) and transformers/huggingface_hub pre-installs
* Leverage [**Hugging Face Deep Learning Containers (DLCs)**](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face) for easy training and deployment of Hugging Face models on Google Cloud infrastructure

Read about our principles for responsible AI at [https://ai.google/responsibility/principles](https://ai.google/responsibility/principles/)