---
datasets:
- theeseus-ai/RiskClassifier
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- gguf
- quantized
- risk-analysis
- fine-tuned
library_name: llama_cpp
---
# GGUF Version - Risk Assessment LLaMA Model
## Model Overview
This is the **GGUF quantized version** of the **Risk Assessment LLaMA Model**, fine-tuned from **meta-llama/Llama-3.1-8B-Instruct** using the **theeseus-ai/RiskClassifier** dataset. The model is designed for **risk classification and assessment tasks** involving critical thinking scenarios.
This version is optimized for **low-latency inference** and for deployment in resource-constrained environments via **llama.cpp**.
## Model Details
- **Base Model:** meta-llama/Llama-3.1-8B-Instruct
- **Quantization Format:** GGUF
- **Fine-tuned Dataset:** [theeseus-ai/RiskClassifier](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Architecture:** Transformer-based language model (LLaMA 3.1)
- **Use Case:** Risk analysis, classification, and reasoning tasks.
## Supported Platforms
This GGUF model is compatible with:
- **llama.cpp**
- **text-generation-webui**
- **ollama** (see the deployment sketch below)
- **GPT4All**
- **KoboldAI**
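For example, the model can be served through **ollama** by pointing a minimal Modelfile at the local GGUF file. This is a sketch: the file name `risk-assessment-gguf-model.Q4_K_M.gguf` and the local model name `risk-assessment` are illustrative, so substitute the quantization you actually downloaded.

```bash
# Create a Modelfile referencing the local GGUF file
# (illustrative file name; use the quantization you downloaded).
cat > Modelfile <<'EOF'
FROM ./risk-assessment-gguf-model.Q4_K_M.gguf
EOF

# Register the model under a local name and run a quick risk query.
ollama create risk-assessment -f Modelfile
ollama run risk-assessment 'Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?'
```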
## Quantization Details
This model is available in the **GGUF format**, allowing it to run efficiently on:
- CPUs (Intel/AMD processors)
- GPUs via the CUDA, ROCm, or Metal backends
- Apple Silicon (M1/M2)
- Embedded devices like Raspberry Pi
**Quantized Sizes Available:**
- **Q4_0, Q4_K_M, Q5_0, Q5_K, Q8_0** (choose based on the quality, speed, and memory trade-off you need; a re-quantization sketch follows below)
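If a published size does not fit your hardware, a higher-precision GGUF file can be re-quantized locally with llama.cpp's quantization tool. This is a sketch that assumes you already have a Q8_0 (or F16) GGUF file; depending on your llama.cpp build the binary is named `llama-quantize` or `quantize`, and the file names are illustrative.

```bash
# Re-quantize a higher-precision GGUF file down to Q4_K_M (file names are illustrative).
./llama-quantize risk-assessment-gguf-model.Q8_0.gguf risk-assessment-gguf-model.Q4_K_M.gguf Q4_K_M
```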
## Model Capabilities
The model performs the following tasks:
- **Risk Classification:** Analyzes contexts and assigns risk levels (Low, Moderate, High, Very High).
- **Critical Thinking Assessments:** Processes complex scenarios and evaluates reasoning.
- **Explanations:** Provides justifications for assigned risk levels.
## Example Use
### Inference with llama.cpp
```bash
# Single quotes keep the shell from expanding "$1" inside the prompt;
# recent llama.cpp builds name the binary llama-cli rather than main.
./main -m risk-assessment-gguf-model.gguf -n 256 -p 'Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?'
```
### Inference with Python (llama-cpp-python)
```python
from llama_cpp import Llama

# Load the GGUF model; n_ctx sets the context window size.
model = Llama(model_path="risk-assessment-gguf-model.gguf", n_ctx=4096)

prompt = "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?"

# Allow enough tokens for a full assessment and print only the generated text.
output = model(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```
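Since the base model is an Instruct variant, chat-formatted requests typically produce cleaner answers. Below is a minimal sketch using llama-cpp-python's `create_chat_completion`, assuming the GGUF file carries the Llama 3.1 chat template metadata; the system prompt is illustrative and not necessarily the exact format used during fine-tuning.

```python
from llama_cpp import Llama

llm = Llama(model_path="risk-assessment-gguf-model.gguf", n_ctx=4096)

# Ask for a structured answer: a risk level plus a short justification.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a risk analyst. Reply with a risk level "
                       "(Low, Moderate, High, Very High) followed by a brief justification.",
        },
        {
            "role": "user",
            "content": "A $10,000 wire transfer to an offshore account was initiated from a new device.",
        },
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```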
## Applications
- Fraud detection and transaction monitoring.
- Automated risk evaluation for compliance and auditing.
- Decision support systems for cybersecurity.
- Risk-level assessments in critical scenarios.
## Limitations
- The model's outputs should be reviewed by domain experts before any actionable decision is taken.
- Performance depends on context length and prompt design.
- May require further tuning for domain-specific applications.
## Evaluation
### Metrics
- **Accuracy on Risk Levels:** Evaluated against test cases with labeled risk scores.
- **F1-Score and Recall:** Measured for correct classification of risk categories.
### Results
- **Accuracy:** 91.2%
- **F1-Score:** 0.89
## Ethical Considerations
- **Bias Mitigation:** Efforts were made to reduce biases, but users should validate outputs for fairness and objectivity.
- **Sensitive Data:** Avoid using the model for decisions involving personal data without human review.
## Model Sources
- **Dataset:** [RiskClassifier Dataset](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Base Model:** [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
## Citation
```bibtex
@misc{riskclassifier2024,
  title     = {Risk Assessment LLaMA Model (GGUF)},
  author    = {Theeseus AI},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/theeseus-ai/RiskClassifier}
}
```
## Contact
- **Author:** Theeseus AI
- **LinkedIn:** [Theeseus](https://www.linkedin.com/in/theeseus/)
- **Email:** theeseus@protonmail.com