Model Card for Respair/Japanese_Phoneme_to_Grapheme_LLM

This model converts Japanese IPA transcriptions back to ordinary written Japanese.

This release upgrades to a more powerful 3B base model, trained on a larger dataset for more steps.

This model should only be used with the specific IPA scheme generated by Hibiki ASR, so it may not produce optimal results on arbitrary IPA text.

Usage

Start an OpenAI-compatible server with vLLM in the terminal:

python -m vllm.entrypoints.openai.api_server --model Respair/Japanese_Phoneme_to_Grapheme_LLM --port 8000
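If you want, you can first confirm the server is up by listing the served models. This is a minimal sketch using the same OpenAI client as the example below, and it assumes the server command above is running on port 8000:

# Optional sanity check: the served model ID should appear in the list.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")
print([m.id for m in client.models.list().data])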

Now you can query it with any OpenAI-compatible client:


# pip install vllm openai

from openai import OpenAI


openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model_name = "Respair/Japanese_Phoneme_to_Grapheme_LLM"


def p2g(prompt):
    # Near-zero temperature keeps the conversion (almost) deterministic.
    chat_response = client.chat.completions.create(
        model=model_name,
        max_tokens=512,
        temperature=0.01,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return chat_response.choices[0].message.content


prompt = """convert this pronunciation back to normal japanese: geɴ'iɴ?  sonna  fɯɯ ni ɕiɽoi hebi no geŋkakɯ ga, omae no ɕɯɯi ni naɴ do mo naɴ do mo aɽawaɽerɯ, kiʔkake na no ka naɴ na no ka? mi ni oboeʔtsɯ no ka? """

result = p2g(prompt)

print(result)
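Since p2g takes the full prompt string, converting several transcriptions is just a loop. A minimal sketch; ipa_lines is a hypothetical list you would fill with your own Hibiki ASR output:

# Convert several Hibiki ASR transcriptions in one go.
ipa_lines = [
    "geɴ'iɴ?  sonna  fɯɯ ni ɕiɽoi hebi no geŋkakɯ ga",  # from the example above
]

for ipa in ipa_lines:
    print(p2g(f"convert this pronunciation back to normal japanese: {ipa}"))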

...or simply through HF Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Respair/Japanese_Phoneme_to_Grapheme_LLM",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Respair/Japanese_Phoneme_to_Grapheme_LLM")


tokenizer.pad_token = "<|endoftext|>"
tokenizer.bos_token = "<|endoftext|>"
tokenizer.eos_token = "<|im_end|>"
prompt = "convert this pronunciation back to normal japanese: geɴ'iɴ?  sonna  fɯɯ ni ɕiɽoi hebi no geŋkakɯ ga, omae no ɕɯɯi ni naɴ do mo naɴ do mo aɽawaɽerɯ, kiʔkake na no ka naɴ na no ka? mi ni oboeʔtsɯ no ka?"
# or any other prompt; the model was trained on an instruction-tuned dataset, so it should be somewhat robust to variations in prompt wording


messages = [

    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.1,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
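For repeated conversions, the Transformers steps above can be wrapped into a small helper. This is a minimal sketch: p2g_hf is a hypothetical name, and it reuses the model, tokenizer, device, and generation settings already defined above:

def p2g_hf(ipa_text: str) -> str:
    """Convert one Hibiki-style IPA string to ordinary Japanese text."""
    messages = [{"role": "user", "content": f"convert this pronunciation back to normal japanese: {ipa_text}"}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(device)
    output_ids = model.generate(
        inputs.input_ids,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.1,
        max_new_tokens=512,
    )
    # Strip the prompt tokens and decode only the newly generated text.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(p2g_hf("geɴ'iɴ?  sonna  fɯɯ ni ɕiɽoi hebi no geŋkakɯ ga"))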