added second screenshot
README.md
CHANGED
@@ -109,6 +109,8 @@ extended_layout = load_dataset_builder("renumics/dcase23-task2-enriched", "dev")
 spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=extended_layout)
 ```
 
+![Analyze DCASE23 Task 2 with Spotlight](data/preview_dcase_2.png "Analyze DCASE23 Task 2 with Spotlight")
+
 ## Using custom model results and enrichments
 
 When developing your custom model, you want to use different kinds of information from your model (e.g. embeddings, anomaly scores, etc.) to gain further insights into the dataset and the model behavior.
@@ -116,7 +118,7 @@ When developing your custom model, you want to use different kinds of information
 Suppose you have your model's embeddings for each datapoint as a 2D NumPy array called `embeddings` and your anomaly scores as a 1D NumPy array called `anomaly_scores`. Then you can add this information to the dataset:
 ```jupyterpython
 df['my_model_embedding'] = embeddings
-df['anomaly_score'] =
+df['anomaly_score'] = anomaly_scores
 ```
 Depending on your concrete task, you might want to use different enrichments. For a good overview of open-source tooling for uncertainty quantification, explainability, and outlier detection, take a look at our [curated list of open-source data-centric AI tooling](https://github.com/Renumics/awesome-open-data-centric-ai) on GitHub.
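The enrichment step in the diff above can be sketched end to end with plain NumPy and pandas. This is a minimal illustration, not the dataset's actual loading code: the array shapes, the `n_samples` count, and the `clip_*.wav` file names are made up for the example. Note that assigning a 2D array to a single pandas column directly can raise an error in plain pandas, so the sketch stores one embedding vector per row via `list(...)`:

```python
import numpy as np
import pandas as pd

# Hypothetical model outputs: one 8-dim embedding and one anomaly score
# per datapoint (shapes and values are illustrative only).
n_samples = 4
embeddings = np.random.rand(n_samples, 8)   # 2D array: (n_samples, embedding_dim)
anomaly_scores = np.random.rand(n_samples)  # 1D array: (n_samples,)

# Stand-in for the enriched dataset as a DataFrame; file names are invented.
df = pd.DataFrame({"path": [f"clip_{i}.wav" for i in range(n_samples)]})

# Store one embedding vector per row; the 1D scores align with the index.
df["my_model_embedding"] = list(embeddings)
df["anomaly_score"] = anomaly_scores
```

With these columns in place, the enriched `df` can be handed to `spotlight.show` as in the snippet above, mapping the embedding column to `spotlight.Embedding`.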