AI for Policymakers
AI systems have much to offer policymakers, both as tools to support their work and as technologies that can improve access to public services.
Note: Regulators around the world are leveraging documentation standards and transparency requirements to make AI systems more reliable. This Space implements a test for any model shared on the Hub, checking whether its documentation meets the standards outlined in the EU AI Act; it is a first step toward automatically assessing compliance for open systems.
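A minimal sketch of this kind of documentation check, using the `huggingface_hub` library's `ModelCard` utilities; the list of required sections here is illustrative, not the Space's actual EU AI Act checklist.

```python
from huggingface_hub import ModelCard

# Sections a transparency standard might require (hypothetical list,
# not the Act's actual requirements).
REQUIRED_SECTIONS = ["intended use", "limitations", "training data", "evaluation"]

def check_documentation(repo_id: str) -> dict:
    """Load a model card from the Hub and flag which sections it covers."""
    card = ModelCard.load(repo_id)
    text = card.text.lower()
    return {section: section in text for section in REQUIRED_SECTIONS}

# Example: report documentation coverage for a model on the Hub.
report = check_documentation("bert-base-uncased")
for section, present in report.items():
    print(f"{section}: {'found' if present else 'MISSING'}")
```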
Modelcard Creator
Note: People with different professional backgrounds have different strengths and weaknesses when filling out Model Cards. This tool helps bridge those gaps by providing a graphical interface for creating Model Cards, and it can import and export in-progress Model Cards in the typical developer .md format.
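A minimal sketch of the developer-side .md workflow the tool interoperates with, using `huggingface_hub`'s model card helpers; the repo id and field values are placeholders.

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that ends up in the card's YAML front matter.
card_data = ModelCardData(language="en", license="mit", library_name="transformers")

# Fill the default card template with free-text fields.
card = ModelCard.from_template(
    card_data,
    model_id="my-org/my-model",  # hypothetical repo id
    model_description="A short description of what the model does.",
)

# Save in the .md format the graphical tool can import and export.
card.save("README.md")
```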
A Watermark for LLMs
Note: Mitigations like watermarks have a major role to play in addressing potential misuses of generative AI systems, but they have to come with reliability guarantees: a lack of transparency about these safeguards can also cause significant damage, for example by wrongly accusing students of cheating on exams. This Space, based on open research and an open implementation of a watermark system for text generation, lets users understand the approach's limitations and provides a strong basis for improving its reliability.
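A minimal sketch of "green list" watermark detection in the style of the open research this Space builds on; the vocabulary size, green-list fraction, and seeding scheme are simplified assumptions, not the Space's exact implementation.

```python
import math
import random

VOCAB_SIZE = 50_000  # assumed tokenizer vocabulary size
GAMMA = 0.5          # assumed fraction of the vocabulary on the "green list"

def green_list(prev_token: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    rng = random.Random(prev_token)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def detection_z_score(token_ids: list[int]) -> float:
    """Count green tokens; a high z-score suggests the text was watermarked."""
    hits = sum(
        1 for prev, tok in zip(token_ids, token_ids[1:])
        if tok in green_list(prev)
    )
    n = len(token_ids) - 1
    # Under the null (no watermark), hits ~ Binomial(n, GAMMA).
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

Unwatermarked text should score near zero; short texts yield weak statistical evidence either way, which is exactly the kind of reliability limitation the note warns about.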
WA Hospital Regulations Chatbot
Note: AI systems can help users understand their rights and navigate complex regulations. While generative AI's propensity for hallucination makes it a poor choice for this task, systems that focus on retrieving information from trustworthy sources based on a user's need can provide significant value. This Space leverages an open Natural Language Understanding model and a dataset of relevant Washington State regulations to help users answer questions like: "Who is allowed to visit an ICU patient?"
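A minimal sketch of retrieval over a trusted corpus, using the open `sentence-transformers` library; the model choice and the sample regulation snippets are assumptions, not the Space's own stack.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# In the real Space, passages come from Washington State hospital regulations;
# these two are invented stand-ins for illustration.
passages = [
    "ICU visitation is limited to immediate family members during posted hours.",
    "Hospitals must post patient rights information in every admissions area.",
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

query = "Who is allowed to visit an ICU patient?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the most relevant source passage rather than generating an answer,
# avoiding the hallucination risk the note describes.
best = int(util.cos_sim(query_embedding, passage_embeddings).argmax())
print(passages[best])
```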
Task Exploration - Automatic Content Moderation
Note: To protect against the harmful effects of AI systems, we need to understand how they fit into their deployment context, how their training datasets reflect historical biases, and the main failure cases of their base ML models. This Space explores the use case of automatic content moderation, bringing together news reports, visualizations of open datasets, and interactive demonstrations of common model failures to help users and policymakers understand where regulations can have the most impact.
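A minimal sketch of the kind of base model such explorations examine, using the `transformers` pipeline API with an openly shared toxicity classifier; the specific checkpoint is an assumption, not necessarily the one used in the Space.

```python
from transformers import pipeline

# An open toxicity classifier shared on the Hub.
moderator = pipeline("text-classification", model="unitary/toxic-bert")

# Scores are probabilities, not judgments: the failure cases the Space
# demonstrates (e.g. dialect bias) are invisible in a single score.
for comment in ["Have a great day!", "You are a terrible person."]:
    print(comment, "->", moderator(comment))
```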
Presidio with custom PII models trained on PII data generated by Privy
Note: Protecting people's privacy and personally identifiable information (PII) is one of the main challenges of modern AI. Open models for PII detection and redaction support safer systems from a privacy perspective, from the pre-training stage through deployment. This Space showcases an open model shared on the Hub that can help developers better document the privacy risks of their training data and provide transparency about whose personal data may be encoded in AI systems.
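A minimal sketch using Presidio's default recognizers; the Space pairs the same pipeline with custom models trained on Privy-generated PII data, which this sketch does not include.

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "My name is Jane Doe and my phone number is 212-555-0199."

# Detect PII spans (entity type, character offsets, confidence score).
analyzer = AnalyzerEngine()
findings = analyzer.analyze(text=text, language="en")

# Redact the detected spans before the text enters a training set or a log.
anonymizer = AnonymizerEngine()
print(anonymizer.anonymize(text=text, analyzer_results=findings).text)
# e.g. "My name is <PERSON> and my phone number is <PHONE_NUMBER>."
```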
dell-research-harvard/AmericanStories
Note: Open AI systems and datasets can unlock new ways of looking at national archive data, as showcased by this open dataset of historical American newspapers with OCR annotations from Harvard University. Sharing these datasets and models supports new fields of study and new tools for analyzing historical trends and understanding present-day society!
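A minimal sketch of loading the dataset for analysis, following the loading pattern documented on its dataset card; the chosen years are arbitrary examples.

```python
from datasets import load_dataset

# The "subset_years" configuration loads article-level texts for given years;
# per the dataset card, the result is keyed by year.
stories = load_dataset(
    "dell-research-harvard/AmericanStories",
    "subset_years",
    year_list=["1809", "1810"],
    trust_remote_code=True,  # the dataset uses a custom loading script
)
print(stories["1809"][0])  # inspect one OCR-annotated newspaper article
```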
Amazonian Fish Classifier
Note: AI systems shine in the breadth of their applications. Open sharing and collaborative development of AI systems help stakeholders unlock more useful capabilities by empowering them to build the right system for their own purpose. This Space, developed by the Smithsonian Institution, is an example of such a capability: the researchers leveraged their own data and an open pre-trained image classifier to build an Amazonian fish recognition system. Niche, but great when you need it!
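A minimal sketch of the pattern the note describes: adapting an open pre-trained image classifier to a new label set. The checkpoint and the label names are assumptions; the Smithsonian researchers used their own fish data and model choices.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["piranha", "arowana", "catfish"]  # hypothetical fish classes

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the original 1000-class head
)
# From here, fine-tune on labeled fish images, e.g. with the Trainer API.
```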
- AIR
- Image Watermarking for Stable Diffusion XL
- Lineage Explorer
- New York CAP