Provenance, Watermarking & Deepfake Detection
Technical tools for more control over non-consensual synthetic content
Note: Example of experimental audio watermarking research.
A Watermark for LLMs
Note: Imperceptibly marks content generated by an LLM as synthetic.
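A rough sketch of how a logit-bias watermark of this kind can work, assuming a "green list" scheme in the style of Kirchenbauer et al.'s "A Watermark for Large Language Models": the previous token seeds a pseudo-random partition of the vocabulary, green tokens get a logit boost during sampling, and a detector counts how often tokens land in their green list. Vocabulary size, hash choice, and bias strength below are illustrative, not the Space's actual parameters.

```python
import hashlib
import random

VOCAB_SIZE = 50        # toy vocabulary: token ids 0..49 (illustrative)
GREEN_FRACTION = 0.5   # fraction of the vocab marked "green" at each step
BIAS = 4.0             # logit boost added to green tokens

def green_list(prev_token: int) -> set:
    """Derive the pseudo-random 'green' token set from the previous token.

    Seeding an RNG with a hash of the preceding token makes the partition
    reproducible by anyone who knows the scheme, without access to the model.
    """
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GREEN_FRACTION * VOCAB_SIZE)])

def watermarked_sample(logits, prev_token):
    """Boost green-token logits, then pick the argmax (greedy, for clarity)."""
    green = green_list(prev_token)
    biased = [l + (BIAS if i in green else 0.0) for i, l in enumerate(logits)]
    return max(range(VOCAB_SIZE), key=biased.__getitem__)

def detect(tokens) -> float:
    """Fraction of tokens falling in their green list.

    Watermarked text scores well above the GREEN_FRACTION expected by chance.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(1, len(tokens) - 1)
```

Because the partition depends only on the previous token and a public hash, detection needs no model access, which is what makes third-party verification of LLM output practical.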
Fawkes
Note: Image "poisoning", which disrupts the ability to build facial recognition models.
Image Watermarking for Stable Diffusion XL
Note: From Imatag: robustly mark images as your own.
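To make the idea of invisible image watermarking concrete, here is a deliberately minimal least-significant-bit embed/extract sketch. This is a toy scheme only: LSB marks do not survive compression or resizing, which is exactly what robust watermarks like Imatag's are built to withstand; nothing here reflects their actual method.

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least-significant bit of the first pixels.

    pixels: flat list of 0-255 ints; bits: list of 0/1 values.
    Changing only the lowest bit keeps the edit visually imperceptible.
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_bits(pixels, n):
    """Recover the first n embedded bits by reading the low bit back out."""
    return [p & 1 for p in pixels[:n]]
```

Production watermarks instead spread the payload redundantly across frequency-domain or learned representations so it survives image transformations.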
Watermarked Content Credentials
Note: From Truepic: sign your images with C2PA Content Credentials.
GenAI with Content Credentials
Note: From Truepic: watermark your images with pointers to original C2PA Content Credentials.
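The core mechanic behind both Truepic Spaces is binding provenance claims to image bytes with a cryptographic signature. The sketch below is a toy stand-in: real C2PA manifests use certificate-based COSE signatures and a structured claim format, whereas this uses a plain HMAC over a JSON dict purely to show the bind-then-verify flow.

```python
import hashlib
import hmac
import json

def sign_manifest(image_bytes, claims, key):
    """Bind provenance claims to an image: hash the bytes, sign the manifest.

    Illustrative HMAC only -- C2PA uses certificate-based COSE signatures.
    """
    manifest = {"image_sha256": hashlib.sha256(image_bytes).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest, sig

def verify_manifest(image_bytes, manifest, sig, key):
    """Check both the signature and that the image still matches its hash."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(sig, expected)
    ok_img = manifest["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return ok_sig and ok_img
```

Any edit to the image or the claims invalidates verification, which is what lets a viewer trust that the stated provenance matches the pixels in front of them.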
Photoguard
Note: From the authors of Photoguard: image "guarding", which makes an image immune to direct editing by generative models.
Photoguard (community contribution)
Note: Community-contributed version of Photoguard, which safeguards images against ML-based photo manipulation.
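Photoguard's "immunization" works by adding a small adversarial perturbation that pushes the image's latent encoding toward a useless target, so generative edits conditioned on that encoding fail. The sketch below assumes a toy linear encoder in place of a diffusion model's image encoder, and runs projected gradient descent with an analytic gradient; every name and parameter here is illustrative, not Photoguard's implementation.

```python
import numpy as np

def immunize(x, W, target, eps=0.1, steps=100, lr=0.05):
    """Find a small perturbation delta (|delta|_inf <= eps) pushing the
    encoding W @ (x + delta) toward `target` (e.g. a gray-image embedding).

    Toy linear encoder: the loss is ||W(x + delta) - target||^2, so the
    gradient w.r.t. delta is 2 * W.T @ (W(x + delta) - target).
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W @ (x + delta)
        grad = 2 * W.T @ (z - target)
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # keep the change imperceptible
    return delta
```

Against a real diffusion model the same loop would backpropagate through the encoder network instead of using a closed-form gradient, but the projection step that keeps the perturbation invisible is the same.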
Dendrokronos
Note: Community contribution of the University of Maryland's image watermark for diffusion models.
Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
Paper (arXiv:2302.04222)

Robust Image Watermarking using Stable Diffusion
Paper (arXiv:2401.04247)

Three Bricks to Consolidate Watermarks for Large Language Models
Paper (arXiv:2308.00113)

CNN Deepfake Image Detection
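CNN-based deepfake detectors are typically trained classifiers, and one cue such networks often pick up on is high-frequency residual artifacts left by generators. As a heavily simplified, hand-crafted stand-in for those learned features (not the Space's actual model), the snippet below measures mean absolute response to a 3x3 Laplacian filter: smooth natural gradients score low, while noisy synthetic texture scores high.

```python
import numpy as np

def highpass_energy(img):
    """Mean absolute response of a 3x3 Laplacian filter over a grayscale image.

    A crude proxy for the high-frequency residual features many CNN
    deepfake detectors learn; real detectors learn their filters from data.
    """
    k = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return np.abs(out).mean()
```

A trained CNN replaces this single fixed filter with many learned ones plus a classifier head, but the intuition of looking for statistical texture anomalies is the same.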