pcuenq (HF staff) committed
Commit c354463 • 1 Parent(s): c84aef0
Files changed (1)
  1. README.md +15 -21
README.md CHANGED
@@ -9,35 +9,29 @@ pinned: false
 
 Welcome to the official Hugging Face organisation for Apple!
 
- # Apple Core ML – Build intelligence into your apps with Core ML
+ ## Apple Core ML – Build intelligence into your apps with Core ML
 
 [Core ML](https://developer.apple.com/machine-learning/core-ml/) is optimized for on-device performance of a broad variety of model types by leveraging Apple Silicon and minimizing memory footprint and power consumption.
 
- ## Core ML Models
-
- - [FastViT](https://huggingface.co/collections/coreml-projects/coreml-fastvit-666b0053e54816747071d755): Image Classification
- - [Depth Anything](https://huggingface.co/coreml-projects/coreml-depth-anything-small): Depth estimation
- - [DETR Resnet50](https://huggingface.co/coreml-projects/coreml-detr-semantic-segmentation): Semantic Segmentation
- - [Stable Diffusion Core ML models](https://huggingface.co/collections/apple/core-ml-stable-diffusion-666b3b0f4b5f3d33c67c6bbe)
- - [Hugging Face Core ML Examples](https://github.com/huggingface/coreml-examples)
+ * Core ML Models
+ - [FastViT](https://huggingface.co/collections/coreml-projects/coreml-fastvit-666b0053e54816747071d755): Image Classification
+ - [Depth Anything](https://huggingface.co/coreml-projects/coreml-depth-anything-small): Depth estimation
+ - [DETR Resnet50](https://huggingface.co/coreml-projects/coreml-detr-semantic-segmentation): Semantic Segmentation
+ - [Stable Diffusion Core ML models](https://huggingface.co/collections/apple/core-ml-stable-diffusion-666b3b0f4b5f3d33c67c6bbe)
+ - [Hugging Face Core ML Examples](https://github.com/huggingface/coreml-examples)
 
 # Apple Machine Learning Research
 
 Open research to enable the community to deliver amazing experiences that improve the lives of millions of people every day.
 
- ## Models
-
- - OpenELM: open, Transformer-based language model. [Base](https://huggingface.co/collections/apple/openelm-pretrained-models-6619ac6ca12a10bd0d0df89e) | [Instruct](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759ca)
- - [MobileCLIP](https://huggingface.co/collections/apple/mobileclip-models-datacompdr-data-665789776e1aa2b59f35f7c8): Mobile-friendly image-text models.
-
- ## Datasets
-
- - [FLAIR](https://huggingface.co/datasets/apple/flair): A large image dataset for federated learning.
- - [DataCompDR](https://huggingface.co/collections/apple/mobileclip-models-datacompdr-data-665789776e1aa2b59f35f7c8): Improved datasets for training image-text models.
-
- ## Benchmarks
-
- - [TiC-CLIP](https://huggingface.co/collections/apple/tic-clip-666097407ed2edff959276e0): Benchmark for the design of efficient continual learning of image-text models over years
+ * Models
+ - OpenELM: open, Transformer-based language model. [Base](https://huggingface.co/collections/apple/openelm-pretrained-models-6619ac6ca12a10bd0d0df89e) | [Instruct](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759ca)
+ - [MobileCLIP](https://huggingface.co/collections/apple/mobileclip-models-datacompdr-data-665789776e1aa2b59f35f7c8): Mobile-friendly image-text models.
+ * Datasets
+ - [FLAIR](https://huggingface.co/datasets/apple/flair): A large image dataset for federated learning.
+ - [DataCompDR](https://huggingface.co/collections/apple/mobileclip-models-datacompdr-data-665789776e1aa2b59f35f7c8): Improved datasets for training image-text models.
+ * Benchmarks
+ - [TiC-CLIP](https://huggingface.co/collections/apple/tic-clip-666097407ed2edff959276e0): Benchmark for the design of efficient continual learning of image-text models over years
 
 # Select Highlights and Other Resources
 