daniellnichols committed
Commit eaf4335 (parent: 4562eaf)

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -21,9 +21,10 @@ This version is a fine-tuning of the [Deepseek Coder 6.7b](https://huggingface.c
 It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
 We utilized the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.
 
-[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b) and HPC-Coder-v2-6.7b are two of the most capable open-source LLMs for parallel and HPC code generation.
-HPC-Coder-v2-6.7b is the best performing LLM under 30b parameters on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
+[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b), [HPC-Coder-v2-6.7b](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b), and [HPC-Coder-v2-16b](https://huggingface.co/hpcgroup/hpc-coder-v2-16b) are the most capable open-source LLMs for parallel and HPC code generation.
+HPC-Coder-v2-16b is currently the best performing open-source LLM on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
 It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
+HPC-Coder-v2-6.7b is not far behind the 16b in terms of performance.
 
 ## Using HPC-Coder-v2
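The hunk ends at the "Using HPC-Coder-v2" heading without showing that section's body. As a hedged sketch of how these models are typically used (not the README's documented usage), they can presumably be loaded through the standard Hugging Face transformers API; the model id comes from the links in the diff, while the prompt and generation settings below are illustrative assumptions.

```python
# Minimal sketch: loading HPC-Coder-v2-6.7b with the standard
# Hugging Face transformers API. Prompt and generation settings
# are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hpcgroup/hpc-coder-v2-6.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads weights across available GPUs (requires accelerate).
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Write an OpenMP-parallel C function that computes the dot product of two double arrays."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```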