---
datasets:
- theeseus-ai/RiskClassifier
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- gguf
- quantized
- risk-analysis
- fine-tuned
library_name: llama_cpp
---
# GGUF Version - Risk Assessment LLaMA Model
## Model Overview
This is the **GGUF quantized version** of the **Risk Assessment LLaMA Model**, fine-tuned from **meta-llama/Llama-3.1-8B-Instruct** using the **theeseus-ai/RiskClassifier** dataset. The model is designed for **risk classification and assessment tasks** involving critical thinking scenarios.
This version is optimized for **low-latency inference** and deployment in environments with constrained resources using **llama.cpp**.
## Model Details
- **Base Model:** meta-llama/Llama-3.1-8B-Instruct
- **Quantization Format:** GGUF
- **Fine-tuned Dataset:** [theeseus-ai/RiskClassifier](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Architecture:** Transformer-based language model (LLaMA 3.1)
- **Use Case:** Risk analysis, classification, and reasoning tasks.
## Supported Platforms
This GGUF model is compatible with:
- **llama.cpp**
- **text-generation-webui**
- **ollama**
- **GPT4All**
- **KoboldAI**
## Quantization Details
This model is available in the **GGUF format**, allowing it to run efficiently on:
- CPUs (Intel/AMD processors)
- GPUs via ROCm, CUDA, or Metal backend
- Apple Silicon (M1/M2)
- Embedded devices like Raspberry Pi
**Quantized Sizes Available:**
- **Q4_0, Q4_K_M, Q5_0, Q5_K, Q8_0** (Choose based on performance needs.)
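To help pick a quantization level, here is a rough size estimator. The bits-per-weight figures are approximate averages for llama.cpp's legacy and k-quant formats (they vary slightly by model and exclude KV-cache overhead), so treat the numbers as ballpark guidance only.

```python
# Approximate average bits-per-weight for common GGUF quantization levels.
# These are rough figures and exclude runtime KV-cache memory.
APPROX_BITS_PER_WEIGHT = {
    "Q4_0": 4.6,
    "Q4_K_M": 4.8,
    "Q5_0": 5.5,
    "Q5_K": 5.7,
    "Q8_0": 8.5,
}

def estimated_file_size_gib(n_params: float, quant: str) -> float:
    """Estimate the GGUF file size in GiB for a given quantization level."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1024**3

# Ballpark sizes for the 8B base model:
for q in APPROX_BITS_PER_WEIGHT:
    print(f"{q}: ~{estimated_file_size_gib(8e9, q):.1f} GiB")
```

Lower-bit quants trade some accuracy for a smaller footprint; Q8_0 is closest to full precision.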
## Model Capabilities
The model performs the following tasks:
- **Risk Classification:** Analyzes contexts and assigns risk levels (Low, Moderate, High, Very High).
- **Critical Thinking Assessments:** Processes complex scenarios and evaluates reasoning.
- **Explanations:** Provides justifications for assigned risk levels.
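One way to frame these tasks is a prompt that names the four risk levels and asks for a justification. The exact format the fine-tune expects is not documented here, so the template below is an illustrative sketch, not the model's canonical prompt:

```python
# Illustrative prompt template for risk classification.
# The wording is an assumption, not the fine-tune's documented format.
RISK_LEVELS = ["Low", "Moderate", "High", "Very High"]

def build_risk_prompt(scenario: str) -> str:
    """Frame a scenario as a risk-classification request with justification."""
    return (
        "You are a risk assessment assistant.\n"
        f"Classify the risk of the following scenario as one of: "
        f"{', '.join(RISK_LEVELS)}, and justify your answer.\n\n"
        f"Scenario: {scenario}\n"
        "Risk level:"
    )

prompt = build_risk_prompt(
    "$10,000 wire transfer to an offshore account detected from a new device."
)
print(prompt)
```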
## Example Use
### Inference with llama.cpp
```bash
# newer llama.cpp builds ship the CLI as `llama-cli`; older builds name it `./main`
./llama-cli -m risk-assessment-gguf-model.gguf -n 256 \
  -p "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?"
```
### Inference with Python (llama-cpp-python)
```python
from llama_cpp import Llama

# load the GGUF model; raise n_ctx if your scenarios are long
model = Llama(model_path="risk-assessment-gguf-model.gguf", n_ctx=4096)

prompt = "Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?"
output = model(prompt, max_tokens=256)

# the completion text is nested under choices[0]["text"]
print(output["choices"][0]["text"])
```
## Applications
- Fraud detection and transaction monitoring.
- Automated risk evaluation for compliance and auditing.
- Decision support systems for cybersecurity.
- Risk-level assessments in critical scenarios.
## Limitations
- Model outputs should be reviewed by domain experts before any decision is acted on.
- Performance depends on context length and prompt design.
- May require further tuning for domain-specific applications.
## Evaluation
### Metrics:
- **Accuracy on Risk Levels:** Evaluated against test cases with labeled risk scores.
- **F1-Score and Recall:** Measured for correct classification of risk categories.
### Results:
- **Accuracy:** 91.2%
- **F1-Score:** 0.89
## Ethical Considerations
- **Bias Mitigation:** Efforts were made to reduce biases, but users should validate outputs for fairness and objectivity.
- **Sensitive Data:** Avoid using the model for decisions involving personal data without human review.
## Model Sources
- **Dataset:** [RiskClassifier Dataset](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Base Model:** [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
## Citation
```bibtex
@misc{riskclassifier2024,
  title={Risk Assessment LLaMA Model (GGUF)},
  author={Theeseus AI},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/theeseus-ai/RiskClassifier}
}
```
## Contact
- **Author:** Theeseus AI
- **LinkedIn:** [Theeseus](https://www.linkedin.com/in/theeseus/)
- **Email:** theeseus@protonmail.com