|
|
--- |
|
|
tags: |
|
|
- object-detection |
|
|
- sam3 |
|
|
- segment-anything |
|
|
- bounding-boxes |
|
|
- uv-script |
|
|
- generated |
|
|
--- |
|
|
|
|
|
# Object Detection: Photograph Detection using sam3 |
|
|
|
|
|
This dataset contains object detection results (bounding boxes) for the class **photograph**, detected in images from [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset) using Meta's SAM3 (Segment Anything Model 3). |
|
|
|
|
|
**Generated using**: [uv-scripts/sam3](https://huggingface.co/datasets/uv-scripts/sam3) detection script |
|
|
|
|
|
## Detection Statistics |
|
|
|
|
|
- **Detected Class**: photograph |
|
|
- **Total Detections**: 4,500 |
|
|
- **Images with Detections**: 1,500 / 1,500 (100.0%) |
|
|
- **Average Detections per Image**: 3.00 |
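
These figures can be re-derived from the `objects` column described in the Dataset Structure section below; a minimal sketch, with the repo id left as a placeholder:

```python
from datasets import load_dataset

ds = load_dataset("<output-dataset>", split="train")  # replace with this dataset's repo id

# Pull just the detections column so the images are not decoded
objects = ds["objects"]
counts = [len(o["bbox"]) for o in objects]

total = sum(counts)
with_detections = sum(1 for c in counts if c > 0)
print(f"Total detections: {total:,}")
print(f"Images with detections: {with_detections:,} / {len(ds):,} ({100 * with_detections / len(ds):.1f}%)")
print(f"Average detections per image: {total / len(ds):.2f}")
```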
|
|
|
|
|
## Processing Details |
|
|
|
|
|
- **Source Dataset**: [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset) |
|
|
- **Model**: [facebook/sam3](https://huggingface.co/facebook/sam3) |
|
|
- **Script Repository**: [uv-scripts/sam3](https://huggingface.co/datasets/uv-scripts/sam3) |
|
|
- **Number of Samples Processed**: 1,500 |
|
|
- **Processing Time**: 2.2 minutes |
|
|
- **Processing Date**: 2025-11-22 16:45 UTC |
|
|
|
|
|
### Configuration |
|
|
|
|
|
- **Image Column**: `image` |
|
|
- **Dataset Split**: `train` |
|
|
- **Class Name**: `photograph` |
|
|
- **Confidence Threshold**: 0.5 |
|
|
- **Mask Threshold**: 0.5 |
|
|
- **Batch Size**: 8 |
|
|
- **Model Dtype**: bfloat16 |
|
|
|
|
|
## Model Information |
|
|
|
|
|
SAM3 (Segment Anything Model 3) is Meta's state-of-the-art object detection and segmentation model that excels at: |
|
|
- 🎯 **Zero-shot detection** - Detect objects using natural language prompts |
|
|
- 📦 **Bounding boxes** - Accurate object localization |
|
|
- 🎭 **Instance segmentation** - Pixel-perfect masks (not included in this dataset) |
|
|
- 🖼️ **Any image domain** - Works on photos, documents, medical images, etc. |
|
|
|
|
|
This dataset uses SAM3 in text-prompted detection mode to find instances of "photograph" in the source images. |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset contains all original columns from the source dataset plus an `objects` column with detection results in HuggingFace object detection format (dict-of-lists): |
|
|
|
|
|
- **bbox**: List of bounding boxes in `[x, y, width, height]` format (pixel coordinates) |
|
|
- **category**: List of category indices (always `0` for single-class detection) |
|
|
- **score**: List of confidence scores (0.0 to 1.0) |
|
|
|
|
|
### Schema |
|
|
|
|
|
```python |
|
|
{ |
|
|
"objects": { |
|
|
"bbox": [[x, y, w, h], ...], # List of bounding boxes |
|
|
"category": [0, 0, ...], # All same class |
|
|
"score": [0.95, 0.87, ...] # Confidence scores |
|
|
} |
|
|
} |
|
|
``` |
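
Boxes are stored as `[x, y, width, height]` (top-left corner plus size, in pixels). If a downstream tool expects corner format instead, a minimal conversion sketch (the repo id is a placeholder):

```python
from datasets import load_dataset

def xywh_to_xyxy(bbox):
    """Convert [x, y, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

dataset = load_dataset("<output-dataset>", split="train")  # replace with this dataset's repo id
corner_boxes = [xywh_to_xyxy(b) for b in dataset[0]["objects"]["bbox"]]
print(corner_boxes)
```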
|
|
|
|
|
## Usage |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load the dataset |
|
|
dataset = load_dataset("<output-dataset>", split="train")  # replace with this dataset's repo id |
|
|
|
|
|
# Access detections for an image |
|
|
example = dataset[0] |
|
|
detections = example["objects"] |
|
|
|
|
|
# Iterate through all detected objects in this image |
|
|
for bbox, category, score in zip( |
|
|
detections["bbox"], |
|
|
detections["category"], |
|
|
detections["score"] |
|
|
): |
|
|
x, y, w, h = bbox |
|
|
print(f"Detected photograph at ({x}, {y}) with confidence {score:.2f}") |
|
|
|
|
|
# Filter high-confidence detections |
|
|
high_conf_examples = [ |
|
|
ex for ex in dataset |
|
|
if any(score > 0.8 for score in ex["objects"]["score"]) |
|
|
] |
|
|
|
|
|
# Count total detections across dataset |
|
|
total = sum(len(ex["objects"]["bbox"]) for ex in dataset) |
|
|
print(f"Total detections: {total}") |
|
|
``` |
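
The filter above keeps or drops whole images. To filter individual detections while keeping `bbox`, `category`, and `score` aligned, a small helper like the following can be used (a sketch that continues from the snippet above, not part of the generation script):

```python
def filter_detections(objects, min_score=0.8):
    """Keep only detections at or above min_score, preserving field alignment."""
    keep = [i for i, s in enumerate(objects["score"]) if s >= min_score]
    return {key: [values[i] for i in keep] for key, values in objects.items()}

strict = filter_detections(dataset[0]["objects"], min_score=0.8)
print(f"Kept {len(strict['bbox'])} of {len(dataset[0]['objects']['bbox'])} detections")
```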
|
|
|
|
|
## Visualization |
|
|
|
|
|
To visualize the detections, you can use the visualization script from the same repository: |
|
|
|
|
|
```bash |
|
|
# Visualize first sample with detections |
|
|
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \ |
|
|
<output-dataset> \ |
|
|
--first-with-detections |
|
|
|
|
|
# Visualize random samples |
|
|
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \ |
|
|
<output-dataset> \ |
|
|
--num-samples 5 |
|
|
|
|
|
# Save visualizations to files |
|
|
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \ |
|
|
<output-dataset> \ |
|
|
--num-samples 3 \ |
|
|
--output-dir ./visualizations |
|
|
``` |
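
Alternatively, you can draw the boxes directly in Python. A minimal sketch using Pillow, assuming the `image` column decodes to PIL images (the standard behaviour for image datasets loaded with `datasets`):

```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("<output-dataset>", split="train")  # replace with this dataset's repo id

example = dataset[0]
img = example["image"].convert("RGB")  # convert() returns a copy we can safely draw on
draw = ImageDraw.Draw(img)

# Draw each detection as a rectangle labelled with its confidence score
for (x, y, w, h), score in zip(example["objects"]["bbox"], example["objects"]["score"]):
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    draw.text((x, max(y - 12, 0)), f"photograph {score:.2f}", fill="red")

img.save("detections_preview.png")
```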
|
|
|
|
|
## Reproduction |
|
|
|
|
|
This dataset was generated using the [uv-scripts/sam3](https://huggingface.co/datasets/uv-scripts/sam3) object detection script: |
|
|
|
|
|
```bash |
|
|
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \ |
|
|
NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \ |
|
|
<output-dataset> \ |
|
|
--class-name photograph \ |
|
|
--confidence-threshold 0.5 \ |
|
|
--mask-threshold 0.5 \ |
|
|
--batch-size 8 \ |
|
|
--dtype bfloat16 |
|
|
``` |
|
|
|
|
|
### Running on HuggingFace Jobs (GPU) |
|
|
|
|
|
This script requires a GPU. To run on HuggingFace infrastructure: |
|
|
|
|
|
```bash |
|
|
hf jobs uv run --flavor a100-large \ |
|
|
-s HF_TOKEN=HF_TOKEN \ |
|
|
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \ |
|
|
NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \ |
|
|
<output-dataset> \ |
|
|
--class-name photograph \ |
|
|
--confidence-threshold 0.5 |
|
|
``` |
|
|
|
|
|
## Performance |
|
|
|
|
|
- **Processing Speed**: ~11.4 images/second |
|
|
- **GPU Configuration**: CUDA with bfloat16 precision |
|
|
|
|
|
--- |
|
|
|
|
|
Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts) |
|
|
|