
AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC

This model was converted from google/gemma-2-2b-jpn-it (itself a fine-tune of google/gemma-2-2b) and compiled with MLC-LLM using q4f32_1 quantization (4-bit quantized weights with float32 activations). The conversion was done with the MLC-Weight-Conversion space.

To run this model, please first install MLC-LLM.
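MLC-LLM ships prebuilt wheels whose package names depend on your platform and GPU backend, so check the MLC-LLM installation docs for the command that matches your setup. As one hedged example, for a CUDA 12.2 Linux machine (adjust or drop the -cu122 suffix for other platforms):

python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cu122 mlc-ai-nightly-cu122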

To chat with the model from your terminal:

mlc_llm chat HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC
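You can also expose the model through MLC-LLM's OpenAI-compatible REST server and query it over HTTP. A minimal sketch, assuming the server's default address of 127.0.0.1:8000 (the Japanese prompt below is just an illustrative example):

mlc_llm serve HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC

curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC",
        "messages": [{"role": "user", "content": "こんにちは！自己紹介してください。"}]
      }'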

For more information on how to use MLC-LLM, please visit the MLC-LLM documentation.

