Daniel De Leon

As the rapid adoption of chatbots and Q&A models continues, so do concerns about their reliability and safety. In response, many state-of-the-art models are being tuned to act as safety guardrails, protecting against malicious usage and avoiding undesired, harmful output. I published a Hugging Face blog introducing a simple, proof-of-concept, RoBERTa-based model that my team and I fine-tuned to detect toxic prompt inputs to chat-style LLMs. The article explores some of the trade-offs of fine-tuning larger decoder models versus smaller encoder models and asks whether "simpler is better" in the arena of toxic prompt detection.

🔗 to blog: https://huggingface.co/blog/daniel-de-leon/toxic-prompt-roberta
🔗 to model: Intel/toxic-prompt-roberta
🔗 to OPEA microservice: https://github.com/opea-project/GenAIComps/tree/main/comps/guardrails/toxicity_detection
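For readers who want to try the model, here is a minimal sketch of how one might score prompts with the Hugging Face `transformers` text-classification pipeline. The `"toxic"` label name and the 0.5 threshold are assumptions for illustration; check the model card for the actual labels and recommended cutoff.

```python
def flag_toxic(results, label="toxic", threshold=0.5):
    """Return True if any pipeline result matches `label` at or above `threshold`.

    `results` is the list of {"label": ..., "score": ...} dicts that a
    transformers text-classification pipeline returns for one input.
    """
    return any(
        r["label"].lower() == label and r["score"] >= threshold
        for r in results
    )


if __name__ == "__main__":
    # Loading the model downloads weights from the Hub, so it is kept
    # behind the main guard. Label names here are assumptions.
    from transformers import pipeline

    clf = pipeline("text-classification", model="Intel/toxic-prompt-roberta")
    prompt = "How do I bake sourdough bread?"
    print(flag_toxic(clf(prompt)))
```

Wrapping the threshold check in a small helper like this makes it easy to tune the cutoff for your own false-positive tolerance before wiring the guardrail into a chat pipeline.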

A huge thank you to my colleagues who helped contribute: @qgao007, @mitalipo, @ashahba, and Fahim Mohammad.
