Lauredecaudin committed
Commit • 3a39df6
1 Parent(s): 2a4ac7e
Update pages/5-A brief note on Bias and evaluation.py
pages/5-A brief note on Bias and evaluation.py
CHANGED
@@ -8,7 +8,16 @@ def bias_evaluation_presentation():
     st.subheader("You can check yourself if the model is biased or toxic !")
     st.markdown("""
     ### Evaluate a generated content
+
+    As you're using our app to enhance your resume with the help of language models, it's important to keep in mind that while these tools can be incredibly powerful, they aren't perfect.
+    Two key things we evaluate in the content generated are toxicity and bias.
+
+    - Toxicity: This refers to any language that could be harmful or inappropriate, like offensive words or unprofessional phrasing. Our goal is to ensure that your resume remains positive and professional, free from any content that could be seen as disrespectful or inappropriate in a work setting.
+    - Bias: Language models can sometimes unintentionally show preferences or stereotypes. This could result in outputs that lean towards certain genders, cultures, or other groups unfairly. We check for bias to make sure your resume reflects a balanced, fair tone and is inclusive for any workplace.
+
+    By monitoring for these issues, we help you create a resume that's not only polished but also respectful and professional, ready to make the best impression!
     """)
+
 
 def toxicity_classif(text):
     model_path = "citizenlab/distilbert-base-multilingual-cased-toxicity"
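The hunk cuts off inside toxicity_classif, right after the model path. For context, here is a minimal sketch of how a classifier like this is typically wired up with the Hugging Face transformers text-classification pipeline; the pipeline call and the Streamlit display are assumptions for illustration, not the Space's actual code.

# Hypothetical sketch, not the Space's actual implementation: it assumes
# toxicity_classif runs the model named in the diff through the standard
# transformers text-classification pipeline and shows the result in Streamlit.
import streamlit as st
from transformers import pipeline

def toxicity_classif(text):
    model_path = "citizenlab/distilbert-base-multilingual-cased-toxicity"
    # The pipeline returns a list of {"label": ..., "score": ...} dicts,
    # one per input; here we classify a single string.
    classifier = pipeline("text-classification", model=model_path)
    result = classifier(text)[0]
    st.write(f"Label: {result['label']} (confidence: {result['score']:.2f})")

Loading the pipeline inside the function keeps the sketch self-contained; a real Streamlit app would likely cache the model (for example with st.cache_resource) so it is not reloaded on every rerun.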