---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- llama-cpp
- gguf-my-repo
---

# AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B) for more details on the model.

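If you want the quantized file on disk before running anything (for offline use, for example), you can fetch it with the Hugging Face CLI. A minimal sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`):

```bash
# One-off download of just the Q4_K_M file; --local-dir places it in the
# current directory instead of the Hugging Face cache.
huggingface-cli download AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF qwen2.5-coder-3b-q4_k_m.gguf --local-dir .
```
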
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
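The `-p` flag runs a one-shot completion. A small variation, assuming a reasonably recent llama.cpp build, caps the generation length and context size:

```bash
# -n limits how many tokens are generated; -c sets the context window.
llama-cli --hf-repo AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-q4_k_m.gguf \
  -p "Write a Python function that reverses a string." -n 256 -c 2048
```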

### Server:
```bash
llama-server --hf-repo AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-q4_k_m.gguf -c 2048
```
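Once the server is up, you can query it over HTTP. A minimal sketch, assuming the default port (8080) and the OpenAI-compatible endpoint that recent llama.cpp builds expose:

```bash
# Send a single chat request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a hello-world program in C."}]}'
```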

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
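For example, a CUDA-enabled build of the same step might look like this (a sketch assuming the Makefile-based build shown above and an installed CUDA toolkit; newer llama.cpp versions have moved to CMake):

```bash
# LLAMA_CUDA=1 enables the Nvidia GPU backend mentioned above.
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```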

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo AIronMind/Qwen2.5-Coder-3B-Q4_K_M-GGUF --hf-file qwen2.5-coder-3b-q4_k_m.gguf -c 2048
```