Update README.md
README.md (changed)
- text-generation-inference
---

<p align="center">
<img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png">
</p>
<hr>

# CodeNinja: Your Advanced Coding Assistant

## Overview

CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210). It was built through supervised fine-tuning on two expansive datasets encompassing over 400,000 coding instructions, and is designed to integrate seamlessly into your daily coding routine.

Discover the quantized versions at [beowolx/CodeNinja-1.0-OpenChat-7B-GGUF](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF).

### Key Features

- **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more.

- **Flexibility and Scalability**: Available in a 7B model size, CodeNinja is adaptable for local runtime environments.

- **Exceptional Performance**: Achieves top-tier results among publicly accessible coding models, particularly notable on benchmarks like HumanEval.

- **Advanced Code Completion**: With a substantial context window size of 8192 tokens, it supports comprehensive project-level code completion.

## Prompt Format

CodeNinja maintains the same prompt structure as OpenChat 3.5. Effective utilization requires adherence to this format:

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```
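For runtimes that don't provide a chat template, the format can also be assembled by hand. The sketch below is illustrative only (`build_openchat_prompt` is a hypothetical helper, not part of the model repo); it reproduces the example prompt above:

```python
def build_openchat_prompt(turns):
    """Render (role, text) turns in the OpenChat 3.5 prompt format.

    Each turn becomes "GPT4 Correct <Role>: <text><|end_of_turn|>", and the
    prompt ends with an open assistant turn for the model to complete.
    """
    role_names = {"user": "GPT4 Correct User", "assistant": "GPT4 Correct Assistant"}
    parts = [f"{role_names[role]}: {text}<|end_of_turn|>" for role, text in turns]
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = build_openchat_prompt(
    [("user", "Hello"), ("assistant", "Hi"), ("user", "How are you today?")]
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` (shown below) handles this for you; manual construction is only needed when driving the model through a raw-completion API.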

🚨 Important: Ensure the use of `<|end_of_turn|>` as the end-of-generation token.

**Adhering to this format is crucial for optimal results.**

## Usage Instructions

### Using LM Studio

The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817).

### Using the Transformers Library

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Initialize the model
model_path = "beowolx/CodeNinja-1.0-OpenChat-7B"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# Load the OpenChat tokenizer
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True)

def generate_one_completion(prompt: str):
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": ""},  # Model response placeholder
    ]

    # Generate token IDs using the chat template
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Produce completion
    generate_ids = model.generate(
        torch.tensor([input_ids]).to("cuda"),
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    # Process the completion
    completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True)
    completion = completion.split("\n\n\n")[0].strip()

    return completion
```
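The final post-processing step in `generate_one_completion` assumes the model separates its answer from any trailing chatter with a run of blank lines. In isolation, that trimming behaves as follows (the sample text is hypothetical):

```python
def trim_completion(text: str) -> str:
    # Same post-processing as generate_one_completion above: keep only the
    # text before the first triple newline and strip surrounding whitespace.
    return text.split("\n\n\n")[0].strip()

sample = "def add(a, b):\n    return a + b\n\n\nHope this helps!"
print(trim_completion(sample))
# def add(a, b):
#     return a + b
```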

## License

CodeNinja is licensed under the MIT License, with model usage subject to the Model License.

## Contact

For queries or support, please open an issue in the repository.