---
base_model: shenzhi-wang/Gemma-2-9B-Chinese-Chat
datasets:
  - V3N0M/Jenna-50K-Alpaca-Uncensored
language:
  - zh
  - en
license: mit
pipeline_tag: text-generation
tags:
  - text-generation-inference
  - code
  - unsloth
  - uncensored
  - finetune
task_categories:
  - conversational
widget:
  - text: >-
      Is this review positive or negative? Review: Best cast iron skillet you
      will ever buy.
    example_title: Sentiment analysis
  - text: >-
      Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
      He chose her because she had ...
    example_title: Coreference resolution
  - text: >-
      On a shelf, there are five books: a gray book, a red book, a purple book,
      a blue book, and a black book ...
    example_title: Logic puzzles
  - text: >-
      The two men running to become New York City's next mayor will face off in
      their first debate Wednesday night ...
    example_title: Reading comprehension
---

## Model Details

### Model Description

- Fine-tuned from shenzhi-wang/Gemma-2-9B-Chinese-Chat on the dataset listed above (V3N0M/Jenna-50K-Alpaca-Uncensored) using unsloth, which makes the model uncensored.

## Training Code and Log

### Training Procedure Raw Files

- The entire training procedure was run on Runpod.io.
- Hardware (Vast.ai listing):
  - GPU: 1x A100 SXM 80 GB
  - CPU: 16 vCPU
  - RAM: 251 GB
  - Disk space to allocate: >150 GB
  - Docker image: `runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04`
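
The environment above can be reproduced locally with a plain `docker run` against the same image. This is a hypothetical invocation sketch (Runpod provisions the container for you; the `--shm-size` value and volume mount path are illustrative assumptions, not part of the card):

```shell
# Launch the same PyTorch 2.2.0 / CUDA 12.1.1 image used for training,
# with all GPUs exposed and a local workspace mounted.
docker run --gpus all -it \
  --shm-size=16g \
  -v "$PWD/workspace:/workspace" \
  runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
```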

### Training Data
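
The dataset name suggests the standard Alpaca `instruction` / `input` / `output` schema. Below is a minimal sketch of rendering one such record into a single training prompt; the template is the generic Alpaca one, an assumption, since the card does not document the exact preprocessing used:

```python
# Generic Alpaca-style prompt template (an assumption; the exact template
# used to fine-tune this model is not documented in the card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

def format_record(record: dict) -> str:
    """Render one instruction/input/output record as a single prompt string."""
    instruction = record["instruction"]
    if record.get("input"):  # fold the optional input field into the instruction
        instruction = f"{instruction}\n\n{record['input']}"
    return ALPACA_TEMPLATE.format(instruction=instruction, output=record["output"])

example = {"instruction": "Translate to Chinese.", "input": "Hello", "output": "你好"}
print(format_record(example))
```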

## Usage

```python
from transformers import pipeline

# This is a text-generation model (see pipeline_tag above); the
# "question-answering" pipeline expects a question/context pair and
# would not work with a causal chat model.
generator = pipeline(
    "text-generation",
    model="stephenlzc/Gemma-2-9B-Chinese-Chat-Uncensored",
)
question = "How to make your girlfriend laugh? Please answer in Chinese."
print(generator(question, max_new_tokens=256)[0]["generated_text"])
```
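
Since the base model is a Gemma-2 chat model, inference prompts should follow Gemma's turn format. `tokenizer.apply_chat_template` produces this automatically, but the underlying layout looks roughly like the sketch below (the public Gemma-2 format; the tokenizer additionally prepends a `<bos>` token):

```python
def gemma2_prompt(user_message: str) -> str:
    """Sketch of the Gemma-2 single-turn chat layout. In practice this is
    produced by tokenizer.apply_chat_template, which also prepends <bos>."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma2_prompt("How to make your girlfriend laugh? Please answer in Chinese."))
```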