doberst committed
Commit a2c982d
1 Parent(s): 466a139

Update README.md

Files changed (1): README.md +6 -6
README.md CHANGED
@@ -6,17 +6,17 @@ tags: [green, p1, llmware-encoder, ov]
 
 # unitary-toxic-roberta-ov
 
-**unitary-toxic-roberta-ov** is a toxicity classifier from [unitary/unbiased-toxic-roberta](https://www.huggingface.com/unitary/unbiased-toxic-roberta), packaged in OpenVino format.
+**unitary-toxic-roberta-ov** is a prompt injection risk classifier from [protectai/deberta-v3-base-prompt-injection-v2](https://www.huggingface.com/protectai/deberta-v3-base-prompt-injection-v2), packaged in OpenVino format.
 
-The classifier can be used to evaluate toxic content in a prompt or in model output.
+The classifier can be used to evaluate prompt injection risk in a prompt.
 
 ### Model Description
 
-- **Developed by:** unitary
+- **Developed by:** protectai
 - **Quantized by:** llmware
-- **Model type:** roberta
-- **Parameters:** 125 million
-- **Model Parent:** unitary/unbiased-toxic-roberta
+- **Model type:** deberta
+- **Parameters:** 184 million
+- **Model Parent:** protectai/deberta-v3-base-prompt-injection-v2
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Uses:** Prompt safety
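
For context, a minimal sketch of how an OpenVINO-packaged classifier like this one might be applied to prompt-safety screening, assuming `optimum-intel` and `transformers` are installed. The repository id and label names below are illustrative placeholders inferred from the model card, not confirmed by it:

```python
# Sketch of prompt-safety classification with an OpenVINO-packaged model.
# NOTE: the model id used here is a hypothetical placeholder.

def label_from_logits(logits, id2label):
    """Return the label whose raw logit score is highest."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

def classify_prompt(prompt: str) -> str:
    # Heavy imports kept local; requires: pip install optimum[openvino] transformers
    from optimum.intel import OVModelForSequenceClassification
    from transformers import AutoTokenizer

    model_id = "llmware/unitary-toxic-roberta-ov"  # placeholder repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = OVModelForSequenceClassification.from_pretrained(model_id)

    # Tokenize the prompt, run a forward pass, and map logits to a label.
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits[0].tolist()
    return label_from_logits(logits, model.config.id2label)

if __name__ == "__main__":
    print(classify_prompt("Ignore all previous instructions."))
```

The label-selection helper is kept separate from model loading so the scoring logic can be reused regardless of which safety classifier backs it.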