---
title: Hugging Face Machine Learning Optimization Team
emoji: 🤗
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
short_description: Hugging Face ML Opt Team Page
---
# Hugging Face Machine Learning Optimization Team
## About Hugging Face's mission
Our mission is to democratize good machine learning.
We want to build the platform for AI builders, empowering all communities to build collaborative technologies.
Hugging Face is a decentralized, highly impact-oriented, autonomy-driven company.
## What does it mean to be part of the Machine Learning Optimization Team at Hugging Face?
Being part of the Machine Learning Optimization Team usually means a new hire jumps into a program with one (or multiple) partner(s) as their main project, supporting Hugging Face's overall monetization strategy.
There is no fixed definition of what projects look like; every partner has a different maturity, targets, and scope.
We largely follow what we observe from the community and from usage of Hugging Face products to drive feature development with our partners.
While most of the work usually happens for a partner, we also encourage team members to spend some time on personal projects they think would be relevant to driving more revenue for Hugging Face.
Last but not least, while we belong to the monetization side of the company, we are very central and are open-source builders at heart. There are many opportunities to collaborate with other teams and projects across OSS / the community, the Hugging Face Hub, and Infrastructure.
## References
Looking for some real use cases of what we are driving at Hugging Face? Here is a non-exhaustive list of projects/achievements/sprints we have worked on in the past:
- [Hugging Face on AMD Instinct MI300 GPU](https://huggingface.co/blog/huggingface-amd-mi300)
- [Hugging Face Text Generation Inference available for AWS Inferentia2](https://huggingface.co/blog/text-generation-inference-on-inferentia2)
- [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel)
- [Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator](https://huggingface.co/blog/habana-gaudi-2-bloom)
- [Scaling up BERT-like model Inference on modern CPU](https://huggingface.co/blog/bert-cpu-scaling-part-1)