---
license: cc0-1.0
---
**Note:** Due to the nature of toxic-comment data, both the data and the code contain explicit language.
The data comes from Kaggle's *Toxic Comment Classification Challenge*:
<br>
https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/data?select=train.csv.zip
<br>
A copy of the data is included in the `data` directory.
The model was trained for 20 epochs on a RunPod instance.
The code requires pandas, tensorflow, and streamlit; all can be installed via `pip`.
```python
import os
import pickle

import streamlit as st
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization


@st.cache_resource
def load_model():
    # Load the trained Keras model once and cache it across Streamlit reruns
    model = tf.keras.models.load_model(os.path.join("model", "toxmodel.keras"))
    return model


@st.cache_resource
def load_vectorizer():
    # Rebuild the TextVectorization layer from its pickled config and weights
    from_disk = pickle.load(open(os.path.join("model", "vectorizer.pkl"), "rb"))
    new_v = TextVectorization.from_config(from_disk["config"])
    # Dummy adapt() call so the layer is built before set_weights() (Keras bug workaround)
    new_v.adapt(tf.data.Dataset.from_tensor_slices(["xyz"]))
    new_v.set_weights(from_disk["weights"])
    return new_v


st.title("Toxic Comment Test")
st.divider()

model = load_model()
vectorizer = load_vectorizer()

input_text = st.text_area("Comment:", "I love you man, but fuck you!", height=150)

if st.button("Test"):
    with st.spinner("Testing..."):
        inputv = vectorizer([input_text])
        output = model.predict(inputv)
    res = output > 0.5  # threshold the six sigmoid scores into boolean flags
    st.write(["toxic", "severe toxic", "obscene", "threat", "insult", "identity hate"], res)
    print(output)
```
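The model emits six independent sigmoid scores, one per label, and the `output > 0.5` line turns them into boolean flags. A minimal sketch of that mapping, using made-up scores rather than real model output:

```python
import numpy as np

labels = ["toxic", "severe toxic", "obscene", "threat", "insult", "identity hate"]

# Hypothetical sigmoid scores for one comment (NOT actual model output)
output = np.array([[0.97, 0.12, 0.81, 0.03, 0.64, 0.08]])

res = output > 0.5  # boolean flags, shape (1, 6)
flagged = [label for label, hit in zip(labels, res[0]) if hit]
print(flagged)  # → ['toxic', 'obscene', 'insult']
```

Because each score is thresholded independently, a comment can trigger any combination of labels, which is what makes this a multi-label rather than multi-class task.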
Put `toxmodel.keras` and `vectorizer.pkl` into the `model` dir.
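For reference, the loader above expects `vectorizer.pkl` to be a pickled dict with `"config"` and `"weights"` keys. A structural sketch of that file (the config values here are hypothetical placeholders, not the settings actually used in training):

```python
import io
import pickle

# Hypothetical TextVectorization config; in practice these come from
# vectorizer.get_config() and vectorizer.get_weights() on the fitted layer.
payload = {
    "config": {"max_tokens": 200000, "output_sequence_length": 1800},
    "weights": [],  # placeholder; the real file holds the vocabulary weights
}

buf = io.BytesIO()               # in the app this would be model/vectorizer.pkl
pickle.dump(payload, buf)
buf.seek(0)
restored = pickle.load(buf)
print(sorted(restored))  # → ['config', 'weights']
```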
Then run:
```
streamlit run toxtest.py
```
Full code can be found here:
<br>
https://github.com/vluz/ToxTest/
<br>