Update README.md
README.md (changed)
```diff
@@ -1,32 +1,23 @@
 ---
 tags:
--
+- object-detection
 - generic
 library_name: generic
 dataset:
 - oxfort-iit pets
 widget:
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/
-  example_title:
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/
-  example_title:
-- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/
-  example_title:
+- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
+  example_title: Savanna
+- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
+  example_title: Football Match
+- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
+  example_title: Airport
 license: cc0-1.0
 ---
-##
-Full credits go to [François Chollet](https://twitter.com/fchollet).
+## Object Detection
 
-
+We've changed the inference part to enable the object detection widget on the Hub (see `pipeline.py`).
 
 ## Background Information
 
-Image classification task tells us about a class assigned to an image, and object detection task creates a boundary box on an object in an image.
-Semantic segmentation models classify pixels, meaning, they assign a class (can be cat or dog) to each pixel. The output of a model looks like following.
-![Raw Output](./raw_output.jpg)
-We need to get the best prediction for every pixel.
-![Mask](./mask.jpg)
-This is still not readable. We have to convert this into different binary masks for each class and convert to a readable format by converting each mask into base64. We will return a list of dicts, and for each dictionary, we have the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we have to return 1.0 for this case). You can find the full implementation in `pipeline.py`.
-![Binary Mask](./binary_mask.jpg)
-Now that you know the expected output by the model, you can host your Keras segmentation models (and other semantic segmentation models) in the similar fashion. Try it yourself and host your segmentation models!
-![Segmented Cat](./hircin_the_cat.png)
+The image classification task assigns a class to an image, while the object detection task draws a bounding box around each object in an image.
```
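For context on what the `pipeline.py` mentioned in the diff needs to provide: repos using `library_name: generic` expose a pipeline class whose `__call__` returns, for object detection, a list of dicts with a `label`, a `score`, and a pixel-coordinate `box`. The sketch below is a minimal, hypothetical illustration of that output contract only; the class name `PreTrainedPipeline` follows the generic-inference convention, and the model loading and inference are stubbed with a placeholder detection rather than taken from this repo's actual implementation.

```python
from typing import Any, Dict, List


class PreTrainedPipeline:
    """Hypothetical sketch of a generic object-detection pipeline."""

    def __init__(self, model_dir: str = "."):
        # A real pipeline would load the Keras model from the repo here,
        # e.g. self.model = tf.keras.models.load_model(model_dir).
        self.model_dir = model_dir

    def __call__(self, inputs: Any) -> List[Dict[str, Any]]:
        # A real pipeline would run the model on the input image and
        # post-process its raw output. Shown here with one placeholder
        # detection in the shape the object-detection widget expects:
        # a label, a confidence score, and a bounding box in pixels.
        return [
            {
                "label": "cat",
                "score": 0.95,
                "box": {"xmin": 10, "ymin": 20, "xmax": 200, "ymax": 180},
            }
        ]
```

Each dict describes one detected object, so the widget can draw one box per entry.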