Quantized Models

- aki-008/Mistral-7B-Instruct-v0.3quant4bit • Text Generation • 7B • Updated Aug 29, 2024
- aki-008/Mistral-7B-Instruct-v0.3_4bit_GPTQ • Text Generation • 7B • Updated Aug 30, 2024
- aki-008/Meta-Llama-3.1-8B-quant4bit • Text Generation • 8B • Updated Aug 29, 2024
- aki-008/Mistral-7B-Instruct-v0.3_4bit_AWQ • Text Generation • 7B • Updated Aug 31, 2024

Fine-tune Datasets

- iamtarun/python_code_instructions_18k_alpaca • Updated Jul 27, 2023
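The checkpoints above are 4-bit quantized variants (GPTQ, AWQ, and bitsandbytes-style 4-bit) of Mistral-7B-Instruct and Llama-3.1-8B. A minimal sketch of loading one of them with Hugging Face `transformers` is below; the repo id comes from the list, while the prompt format and generation settings are illustrative assumptions, not taken from the model cards.

```python
# Sketch: load a 4-bit GPTQ checkpoint from the list above and generate text.
# Assumes `transformers` (plus a GPTQ backend such as `auto-gptq` or
# `gptqmodel`) is installed; settings here are illustrative, not official.

repo_id = "aki-008/Mistral-7B-Instruct-v0.3_4bit_GPTQ"  # from the list above

def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Download the quantized checkpoint and run a single generation."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # device_map="auto" places the 4-bit weights on a GPU when one is available.
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Usage (downloads the quantized weights on first call):
# print(generate_reply("[INST] Reverse a string in Python. [/INST]"))
```

The Mistral instruct models expect the `[INST] ... [/INST]` chat template; `tokenizer.apply_chat_template` can build it instead of hand-writing the tags.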