Triangle104 committed
Commit 626f5b4 · verified · 1 Parent(s): 6407f76

Update README.md

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
@@ -17,6 +17,45 @@ language:
  This model was converted to GGUF format from [`aixonlab/Valkyyrie-14b-v1`](https://huggingface.co/aixonlab/Valkyyrie-14b-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/aixonlab/Valkyyrie-14b-v1) for more details on the model.

+ ---
+ Model details:
+
+ Valkyyrie 14b v1 is a fine-tuned large language model based on Microsoft's Phi-4, further trained for better conversational capabilities.
+
+ Details 📊
+
+ - Developed by: AIXON Lab
+ - Model type: Causal Language Model
+ - Language(s): English (primarily), may support other languages
+ - License: apache-2.0
+ - Repository: https://huggingface.co/aixonlab/Valkyyrie-14b-v1
+
+ Model Architecture 🏗️
+
+ - Base model: phi-4
+ - Parameter count: ~14 billion
+ - Architecture specifics: Transformer-based language model
+
+ Training & Fine-tuning 🔄
+
+ Valkyyrie-14b-v1 was fine-tuned to achieve:
+
+ - Better conversational skills
+ - Better creativity for writing and conversations
+ - Broader knowledge across various topics
+ - Improved performance on specific tasks like writing, analysis, and problem-solving
+ - Better contextual understanding and response generation
+
+ Intended Use 🎯
+
+ Use as an assistant or a specific role bot.
+
+ Ethical Considerations 🤔
+
+ As a fine-tuned model based on phi-4, this model may inherit biases and limitations from its parent model and the fine-tuning dataset. Users should be aware of potential biases in generated content and use the model responsibly.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
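As a minimal sketch of that install step and of one way to invoke the converted model with the llama.cpp CLI (the repo slug and the `.gguf` filename below are placeholders, not names taken from this commit):

```bash
# Install llama.cpp via Homebrew; this provides the llama-cli binary.
brew install llama.cpp

# Run inference directly from a Hugging Face repo.
# NOTE: the repo slug and .gguf filename are placeholders; use the
# quantization actually published in this repository.
llama-cli \
  --hf-repo Triangle104/Valkyyrie-14b-v1-Q4_K_M-GGUF \
  --hf-file valkyyrie-14b-v1-q4_k_m.gguf \
  -p "Write a one-sentence introduction of yourself."
```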