AlicanKiraz0 committed on
Commit f04dbf4 · verified · Parent: 1e57b82

Update README.md

Files changed (1): README.md (+38, -3)
README.md CHANGED
@@ -2,13 +2,48 @@
 license: mit
 language:
 - en
-base_model: AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity
+base_model:
+- Qwen/Qwen2.5-Coder-7B-Instruct
 pipeline_tag: text-classification
 tags:
-- llama-cpp
 - gguf-my-repo
+- qwen2.5
+- cybersecurity
+- ethicalhacking
+- informationsecurity
+- pentest
 ---
 
+<img src="https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q4_K_M-GGUF/resolve/main/SenecaLLMxqwen2.5-7B.webp" width="1000" />
+
+Curated and trained by Alican Kiraz
+
+[![Linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://tr.linkedin.com/in/alican-kiraz)
+![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2FAlicanKiraz0)
+![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCEAiUT9FMFemDtcKo9G9nUQ)
+
+Links:
+- Medium: https://alican-kiraz1.medium.com/
+- Linkedin: https://tr.linkedin.com/in/alican-kiraz
+- X: https://x.com/AlicanKiraz0
+- YouTube: https://youtube.com/@alicankiraz0
+
+SenecaLLM has been trained and fine-tuned for nearly one month (around 100 hours in total) on various systems such as 1x RTX 4090, 8x RTX 4090, and 3x H100, focusing on the cybersecurity topics below. Its goal is to think like a cybersecurity expert and assist with your questions. It has also been fine-tuned to counteract malicious use.
+
+**It does not pursue any profit.**
+
+Over time, it will specialize in the following areas:
+
+- Incident Response
+- Threat Hunting
+- Code Analysis
+- Exploit Development
+- Reverse Engineering
+- Malware Analysis
+
+"Those who shed light on others do not remain in darkness..."
+
+
 # AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q2_K-GGUF
 This model was converted to GGUF format from [`AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity`](https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity) for more details on the model.
@@ -51,4 +86,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q2_K-GGUF --hf-file senecallm_x_qwen2.5-7b-cybersecurity-q2_k.gguf -c 2048
-```
+```
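
Aside: llama.cpp normally applies the model's chat template automatically, but when assembling prompts by hand, Qwen2.5-family models (the new `base_model` in this diff) use the ChatML format. Below is a minimal sketch assuming the standard Qwen2.5 chat template; the helper name is hypothetical and not part of this repository.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # ChatML format used by Qwen2.5-family models:
    # each turn is wrapped in <|im_start|>{role} ... <|im_end|>,
    # and generation continues after the opening assistant tag.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a cybersecurity expert.",
    "What is threat hunting?",
)
print(prompt)
```

This would be passed as the raw prompt when calling the `llama-cli`/`llama-server` completion path without the built-in template.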