Upload folder using huggingface_hub (#1)
- 0514a3e328794675c8b5b4c43664bb23b7a4ab1e25f46258aef435b3f96486c3 (333d704bf14550fe32ae8d27a18df2d2eb1c3954)
- 817ae1aa720fa264790512a00998572ddff924791f1875c9a97c45f6561c0ad6 (9f38c18a698163984644607aa748932460534721)
- f5a79c5c8ac4ef725adde18f0414c1206bc3b358d7b0b320b1a6c26916c02cba (f76496e28cdb0b8d7f370c3b03f09ff1dfdf1977)
- e76266efdae7ccc57e89343e9057bd51a22ea5c37a33a708c377fe6fd26f2c2a (b06b1cbef438cc7495593ae188d8c0a8254efb2b)
- 9b19d3211723d2412dedc7aad699b63b2df8c053c4edab7c769dd7f0688f8e88 (1a997845f08d4d55ff4451b2b8318dd5b4fdfe7f)
- 479493cfafba497c28fe04e0766e98423ebcd5971e94ab23b2a17ffd71c68b28 (79064102ef39eec6d2d71546be094f35867729c7)
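For context, and not part of the commit itself: a multi-part upload like the one recorded above is what huggingface_hub produces when pushing a local folder to the Hub. A minimal sketch, assuming a local folder holding the GGUF files (the folder path below is hypothetical; the repo id is this repository's):

```python
# Minimal sketch, not part of the commit: push a local folder to the Hub.
# Assumes `pip install huggingface_hub` and an authenticated session.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./Qwen2.5-7B-Instruct-Uncensored-GGUF",  # hypothetical local path
    repo_id="MaziyarPanahi/Qwen2.5-7B-Instruct-Uncensored-GGUF",
    commit_message="Upload folder using huggingface_hub",
)
```

Large files in the folder are split into parts and uploaded through Git LFS, which is why the commit description lists one hash per uploaded part.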
- .gitattributes +6 -0
- Qwen2.5-7B-Instruct-Uncensored-GGUF_imatrix.dat +3 -0
- Qwen2.5-7B-Instruct-Uncensored.Q5_K_M.gguf +3 -0
- Qwen2.5-7B-Instruct-Uncensored.Q5_K_S.gguf +3 -0
- Qwen2.5-7B-Instruct-Uncensored.Q6_K.gguf +3 -0
- Qwen2.5-7B-Instruct-Uncensored.Q8_0.gguf +3 -0
- Qwen2.5-7B-Instruct-Uncensored.fp16.gguf +3 -0
- README.md +46 -0
    	
.gitattributes CHANGED

@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-Uncensored-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
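Each file added below is stored through Git LFS, so its diff shows only a three-line pointer (version, oid, size) rather than the binary contents. As a minimal sketch, not part of the commit, such a pointer can be parsed in Python; the path used in the usage comment is a hypothetical local checkout:

```python
# Minimal sketch: parse a Git LFS pointer file into its key/value fields.
# Every pointer added in this commit has the same three-line layout:
#   version <url>, oid sha256:<hex>, size <bytes>
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields

# Hypothetical usage on a checkout without LFS smudging:
# ptr = parse_lfs_pointer("Qwen2.5-7B-Instruct-Uncensored.Q8_0.gguf")
# print(ptr["oid"], int(ptr["size"]))  # e.g. sha256:9549f2e6..., 8098524192
```

The `size` field is the byte count of the real object, which is how the sizes of the quantized files below can be read straight from their pointers.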
    	
Qwen2.5-7B-Instruct-Uncensored-GGUF_imatrix.dat ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91787af03e07d6fa8eea1080624be62776f39a3d2170c972bd56e4490f851548
+size 4536654
    	
Qwen2.5-7B-Instruct-Uncensored.Q5_K_M.gguf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fb768695ea4285d550e38a17da2afc6921f9b1175a75637caafc30fc3b33ffb
+size 5444830240
    	
Qwen2.5-7B-Instruct-Uncensored.Q5_K_S.gguf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:591d71ff5a63091409aeab7e7ee2a569c898ee60d9526a0644f95a6c3cb922bb
+size 5315175456
    	
Qwen2.5-7B-Instruct-Uncensored.Q6_K.gguf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32b82935498879475634ac3f73b44252a45fc617f3b3e4ee23c05c414d5239c0
+size 6254197792
    	
Qwen2.5-7B-Instruct-Uncensored.Q8_0.gguf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9549f2e652b9682da5faad7b7010eee8269420f99e0d79a0bf1d2ba7ba45efc2
+size 8098524192
    	
Qwen2.5-7B-Instruct-Uncensored.fp16.gguf ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f54aca0961a6cd7e65325dc5adb336fe3c7ab66195942bc8cb64dfa290c2dcd
+size 15237851968
    	
README.md ADDED

@@ -0,0 +1,46 @@
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+- text-generation
+model_name: Qwen2.5-7B-Instruct-Uncensored-GGUF
+base_model: Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
+inference: false
+model_creator: Orion-zhen
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [MaziyarPanahi/Qwen2.5-7B-Instruct-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-Uncensored-GGUF)
+- Model creator: [Orion-zhen](https://huggingface.co/Orion-zhen)
+- Original model: [Orion-zhen/Qwen2.5-7B-Instruct-Uncensored](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored)
+
+## Description
+[MaziyarPanahi/Qwen2.5-7B-Instruct-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-Uncensored-GGUF) contains GGUF format model files for [Orion-zhen/Qwen2.5-7B-Instruct-Uncensored](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored).
+
+### About GGUF
+
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
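As an illustrative sketch, not part of this commit: the llama-cpp-python client listed in the README above can load one of these GGUF files locally. The file path, context size, prompt, and token limit below are assumptions:

```python
# Minimal sketch (assumes `pip install llama-cpp-python` and a local copy of
# one of the GGUF files from this repo; path and parameters are illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen2.5-7B-Instruct-Uncensored.Q5_K_M.gguf",  # hypothetical local path
    n_ctx=4096,  # context window; adjust to available RAM
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The larger quantizations (Q6_K, Q8_0, fp16) trade memory for quality the same way; only `model_path` changes.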
